00:00:00.001 Started by upstream project "autotest-nightly" build number 4127 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3489 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.029 The recommended git tool is: git 00:00:00.029 using credential 00000000-0000-0000-0000-000000000002 00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.052 Fetching changes from the remote Git repository 00:00:00.059 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.095 Using shallow fetch with depth 1 00:00:00.095 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.095 > git --version # timeout=10 00:00:00.145 > git --version # 'git version 2.39.2' 00:00:00.145 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.214 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.214 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.656 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.669 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.681 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:03.681 > git config core.sparsecheckout # timeout=10 00:00:03.693 > git read-tree -mu HEAD # timeout=10 00:00:03.709 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:03.729 Commit message: 
"kid: add issue 3541" 00:00:03.729 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:03.856 [Pipeline] Start of Pipeline 00:00:03.868 [Pipeline] library 00:00:03.869 Loading library shm_lib@master 00:00:03.869 Library shm_lib@master is cached. Copying from home. 00:00:03.882 [Pipeline] node 00:00:03.893 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.895 [Pipeline] { 00:00:03.903 [Pipeline] catchError 00:00:03.904 [Pipeline] { 00:00:03.915 [Pipeline] wrap 00:00:03.924 [Pipeline] { 00:00:03.932 [Pipeline] stage 00:00:03.934 [Pipeline] { (Prologue) 00:00:03.951 [Pipeline] echo 00:00:03.953 Node: VM-host-WFP7 00:00:03.959 [Pipeline] cleanWs 00:00:03.971 [WS-CLEANUP] Deleting project workspace... 00:00:03.971 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.980 [WS-CLEANUP] done 00:00:04.177 [Pipeline] setCustomBuildProperty 00:00:04.269 [Pipeline] httpRequest 00:00:04.698 [Pipeline] echo 00:00:04.699 Sorcerer 10.211.164.101 is alive 00:00:04.709 [Pipeline] retry 00:00:04.711 [Pipeline] { 00:00:04.725 [Pipeline] httpRequest 00:00:04.730 HttpMethod: GET 00:00:04.730 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.731 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.732 Response Code: HTTP/1.1 200 OK 00:00:04.733 Success: Status code 200 is in the accepted range: 200,404 00:00:04.733 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.942 [Pipeline] } 00:00:04.958 [Pipeline] // retry 00:00:04.965 [Pipeline] sh 00:00:05.253 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:05.269 [Pipeline] httpRequest 00:00:05.643 [Pipeline] echo 00:00:05.644 Sorcerer 10.211.164.101 is alive 00:00:05.655 [Pipeline] retry 00:00:05.657 [Pipeline] { 00:00:05.672 [Pipeline] httpRequest 00:00:05.677 HttpMethod: 
GET 00:00:05.678 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:05.678 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:05.681 Response Code: HTTP/1.1 200 OK 00:00:05.682 Success: Status code 200 is in the accepted range: 200,404 00:00:05.682 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:57.889 [Pipeline] } 00:00:57.911 [Pipeline] // retry 00:00:57.919 [Pipeline] sh 00:00:58.209 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:00.764 [Pipeline] sh 00:01:01.051 + git -C spdk log --oneline -n5 00:01:01.051 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:01.051 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:01.051 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:01.051 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:01.051 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:01.071 [Pipeline] writeFile 00:01:01.086 [Pipeline] sh 00:01:01.371 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:01.382 [Pipeline] sh 00:01:01.663 + cat autorun-spdk.conf 00:01:01.663 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.663 SPDK_RUN_ASAN=1 00:01:01.663 SPDK_RUN_UBSAN=1 00:01:01.663 SPDK_TEST_RAID=1 00:01:01.663 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.671 RUN_NIGHTLY=1 00:01:01.673 [Pipeline] } 00:01:01.688 [Pipeline] // stage 00:01:01.697 [Pipeline] stage 00:01:01.698 [Pipeline] { (Run VM) 00:01:01.709 [Pipeline] sh 00:01:01.994 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:01.994 + echo 'Start stage prepare_nvme.sh' 00:01:01.994 Start stage prepare_nvme.sh 00:01:01.994 + [[ -n 0 ]] 00:01:01.994 + disk_prefix=ex0 00:01:01.994 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:01.994 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:01.994 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:01.994 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.994 ++ SPDK_RUN_ASAN=1 00:01:01.994 ++ SPDK_RUN_UBSAN=1 00:01:01.994 ++ SPDK_TEST_RAID=1 00:01:01.994 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.994 ++ RUN_NIGHTLY=1 00:01:01.994 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:01.994 + nvme_files=() 00:01:01.994 + declare -A nvme_files 00:01:01.994 + backend_dir=/var/lib/libvirt/images/backends 00:01:01.994 + nvme_files['nvme.img']=5G 00:01:01.994 + nvme_files['nvme-cmb.img']=5G 00:01:01.994 + nvme_files['nvme-multi0.img']=4G 00:01:01.994 + nvme_files['nvme-multi1.img']=4G 00:01:01.994 + nvme_files['nvme-multi2.img']=4G 00:01:01.994 + nvme_files['nvme-openstack.img']=8G 00:01:01.994 + nvme_files['nvme-zns.img']=5G 00:01:01.994 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:01.994 + (( SPDK_TEST_FTL == 1 )) 00:01:01.994 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:01.994 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:01.994 + for nvme in "${!nvme_files[@]}" 00:01:01.994 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:01.994 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.994 + for nvme in "${!nvme_files[@]}" 00:01:01.994 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:01.994 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.994 + for nvme in "${!nvme_files[@]}" 00:01:01.994 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:01.994 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:01.994 + for nvme in "${!nvme_files[@]}" 00:01:01.994 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:01.994 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.994 + for nvme in "${!nvme_files[@]}" 00:01:01.994 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:01.994 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.994 + for nvme in "${!nvme_files[@]}" 00:01:01.994 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:01.994 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.994 + for nvme in "${!nvme_files[@]}" 00:01:01.994 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:02.254 
Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.254 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:02.254 + echo 'End stage prepare_nvme.sh' 00:01:02.254 End stage prepare_nvme.sh 00:01:02.267 [Pipeline] sh 00:01:02.552 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:02.552 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:02.552 00:01:02.552 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:02.552 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:02.552 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:02.552 HELP=0 00:01:02.552 DRY_RUN=0 00:01:02.552 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:02.552 NVME_DISKS_TYPE=nvme,nvme, 00:01:02.552 NVME_AUTO_CREATE=0 00:01:02.552 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:02.552 NVME_CMB=,, 00:01:02.552 NVME_PMR=,, 00:01:02.552 NVME_ZNS=,, 00:01:02.552 NVME_MS=,, 00:01:02.552 NVME_FDP=,, 00:01:02.552 SPDK_VAGRANT_DISTRO=fedora39 00:01:02.552 SPDK_VAGRANT_VMCPU=10 00:01:02.552 SPDK_VAGRANT_VMRAM=12288 00:01:02.552 SPDK_VAGRANT_PROVIDER=libvirt 00:01:02.552 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:02.552 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:02.552 SPDK_OPENSTACK_NETWORK=0 00:01:02.552 VAGRANT_PACKAGE_BOX=0 00:01:02.552 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:02.552 
FORCE_DISTRO=true 00:01:02.552 VAGRANT_BOX_VERSION= 00:01:02.552 EXTRA_VAGRANTFILES= 00:01:02.552 NIC_MODEL=virtio 00:01:02.552 00:01:02.552 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:02.552 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:04.463 Bringing machine 'default' up with 'libvirt' provider... 00:01:04.723 ==> default: Creating image (snapshot of base box volume). 00:01:04.984 ==> default: Creating domain with the following settings... 00:01:04.984 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727539339_8680de52b18dc6e2bfc4 00:01:04.984 ==> default: -- Domain type: kvm 00:01:04.984 ==> default: -- Cpus: 10 00:01:04.984 ==> default: -- Feature: acpi 00:01:04.984 ==> default: -- Feature: apic 00:01:04.984 ==> default: -- Feature: pae 00:01:04.984 ==> default: -- Memory: 12288M 00:01:04.984 ==> default: -- Memory Backing: hugepages: 00:01:04.984 ==> default: -- Management MAC: 00:01:04.984 ==> default: -- Loader: 00:01:04.984 ==> default: -- Nvram: 00:01:04.984 ==> default: -- Base box: spdk/fedora39 00:01:04.984 ==> default: -- Storage pool: default 00:01:04.984 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727539339_8680de52b18dc6e2bfc4.img (20G) 00:01:04.984 ==> default: -- Volume Cache: default 00:01:04.984 ==> default: -- Kernel: 00:01:04.984 ==> default: -- Initrd: 00:01:04.984 ==> default: -- Graphics Type: vnc 00:01:04.984 ==> default: -- Graphics Port: -1 00:01:04.984 ==> default: -- Graphics IP: 127.0.0.1 00:01:04.984 ==> default: -- Graphics Password: Not defined 00:01:04.984 ==> default: -- Video Type: cirrus 00:01:04.984 ==> default: -- Video VRAM: 9216 00:01:04.984 ==> default: -- Sound Type: 00:01:04.984 ==> default: -- Keymap: en-us 00:01:04.984 ==> default: -- TPM Path: 00:01:04.984 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:04.984 ==> default: -- Command line args: 00:01:04.984 
==> default: -> value=-device, 00:01:04.984 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:04.984 ==> default: -> value=-drive, 00:01:04.984 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:04.984 ==> default: -> value=-device, 00:01:04.984 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.984 ==> default: -> value=-device, 00:01:04.984 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:04.984 ==> default: -> value=-drive, 00:01:04.984 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:04.984 ==> default: -> value=-device, 00:01:04.984 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.984 ==> default: -> value=-drive, 00:01:04.984 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:04.984 ==> default: -> value=-device, 00:01:04.984 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.985 ==> default: -> value=-drive, 00:01:04.985 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:04.985 ==> default: -> value=-device, 00:01:04.985 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.985 ==> default: Creating shared folders metadata... 00:01:04.985 ==> default: Starting domain. 00:01:06.368 ==> default: Waiting for domain to get an IP address... 00:01:24.480 ==> default: Waiting for SSH to become available... 00:01:24.480 ==> default: Configuring and enabling network interfaces... 
00:01:29.765 default: SSH address: 192.168.121.212:22 00:01:29.765 default: SSH username: vagrant 00:01:29.765 default: SSH auth method: private key 00:01:32.304 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:40.446 ==> default: Mounting SSHFS shared folder... 00:01:42.352 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:42.352 ==> default: Checking Mount.. 00:01:44.256 ==> default: Folder Successfully Mounted! 00:01:44.256 ==> default: Running provisioner: file... 00:01:45.194 default: ~/.gitconfig => .gitconfig 00:01:45.764 00:01:45.764 SUCCESS! 00:01:45.764 00:01:45.764 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:45.764 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:45.764 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:45.764 00:01:45.773 [Pipeline] } 00:01:45.787 [Pipeline] // stage 00:01:45.795 [Pipeline] dir 00:01:45.795 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:45.797 [Pipeline] { 00:01:45.808 [Pipeline] catchError 00:01:45.810 [Pipeline] { 00:01:45.821 [Pipeline] sh 00:01:46.103 + vagrant ssh-config --host vagrant 00:01:46.103 + sed -ne /^Host/,$p 00:01:46.103 + tee ssh_conf 00:01:48.637 Host vagrant 00:01:48.637 HostName 192.168.121.212 00:01:48.637 User vagrant 00:01:48.637 Port 22 00:01:48.637 UserKnownHostsFile /dev/null 00:01:48.637 StrictHostKeyChecking no 00:01:48.637 PasswordAuthentication no 00:01:48.637 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:48.637 IdentitiesOnly yes 00:01:48.637 LogLevel FATAL 00:01:48.637 ForwardAgent yes 00:01:48.637 ForwardX11 yes 00:01:48.637 00:01:48.649 [Pipeline] withEnv 00:01:48.652 [Pipeline] { 00:01:48.665 [Pipeline] sh 00:01:48.948 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:48.948 source /etc/os-release 00:01:48.948 [[ -e /image.version ]] && img=$(< /image.version) 00:01:48.948 # Minimal, systemd-like check. 00:01:48.948 if [[ -e /.dockerenv ]]; then 00:01:48.948 # Clear garbage from the node's name: 00:01:48.948 # agt-er_autotest_547-896 -> autotest_547-896 00:01:48.948 # $HOSTNAME is the actual container id 00:01:48.948 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:48.948 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:48.948 # We can assume this is a mount from a host where container is running, 00:01:48.948 # so fetch its hostname to easily identify the target swarm worker. 
00:01:48.948 container="$(< /etc/hostname) ($agent)" 00:01:48.948 else 00:01:48.948 # Fallback 00:01:48.948 container=$agent 00:01:48.948 fi 00:01:48.948 fi 00:01:48.948 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:48.948 00:01:49.221 [Pipeline] } 00:01:49.238 [Pipeline] // withEnv 00:01:49.246 [Pipeline] setCustomBuildProperty 00:01:49.261 [Pipeline] stage 00:01:49.263 [Pipeline] { (Tests) 00:01:49.280 [Pipeline] sh 00:01:49.563 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:49.836 [Pipeline] sh 00:01:50.119 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:50.394 [Pipeline] timeout 00:01:50.394 Timeout set to expire in 1 hr 30 min 00:01:50.396 [Pipeline] { 00:01:50.409 [Pipeline] sh 00:01:50.691 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:51.261 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:51.273 [Pipeline] sh 00:01:51.608 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:51.883 [Pipeline] sh 00:01:52.167 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:52.445 [Pipeline] sh 00:01:52.732 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:52.992 ++ readlink -f spdk_repo 00:01:52.992 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:52.992 + [[ -n /home/vagrant/spdk_repo ]] 00:01:52.992 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:52.992 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:52.992 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:52.992 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:52.992 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:52.992 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:52.992 + cd /home/vagrant/spdk_repo 00:01:52.992 + source /etc/os-release 00:01:52.992 ++ NAME='Fedora Linux' 00:01:52.992 ++ VERSION='39 (Cloud Edition)' 00:01:52.992 ++ ID=fedora 00:01:52.992 ++ VERSION_ID=39 00:01:52.992 ++ VERSION_CODENAME= 00:01:52.992 ++ PLATFORM_ID=platform:f39 00:01:52.992 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:52.992 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.992 ++ LOGO=fedora-logo-icon 00:01:52.992 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:52.992 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.992 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:52.992 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.992 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.992 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.992 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:52.992 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.992 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:52.992 ++ SUPPORT_END=2024-11-12 00:01:52.992 ++ VARIANT='Cloud Edition' 00:01:52.992 ++ VARIANT_ID=cloud 00:01:52.992 + uname -a 00:01:52.992 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:52.993 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:53.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:53.564 Hugepages 00:01:53.564 node hugesize free / total 00:01:53.564 node0 1048576kB 0 / 0 00:01:53.564 node0 2048kB 0 / 0 00:01:53.564 00:01:53.564 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.564 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:53.564 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:53.564 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 
nvme0n1 nvme0n2 nvme0n3 00:01:53.824 + rm -f /tmp/spdk-ld-path 00:01:53.824 + source autorun-spdk.conf 00:01:53.824 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.824 ++ SPDK_RUN_ASAN=1 00:01:53.824 ++ SPDK_RUN_UBSAN=1 00:01:53.824 ++ SPDK_TEST_RAID=1 00:01:53.824 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.824 ++ RUN_NIGHTLY=1 00:01:53.824 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:53.824 + [[ -n '' ]] 00:01:53.824 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:53.824 + for M in /var/spdk/build-*-manifest.txt 00:01:53.824 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:53.824 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.824 + for M in /var/spdk/build-*-manifest.txt 00:01:53.824 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:53.825 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.825 + for M in /var/spdk/build-*-manifest.txt 00:01:53.825 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.825 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.825 ++ uname 00:01:53.825 + [[ Linux == \L\i\n\u\x ]] 00:01:53.825 + sudo dmesg -T 00:01:53.825 + sudo dmesg --clear 00:01:53.825 + dmesg_pid=5432 00:01:53.825 + [[ Fedora Linux == FreeBSD ]] 00:01:53.825 + sudo dmesg -Tw 00:01:53.825 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.825 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.825 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.825 + [[ -x /usr/src/fio-static/fio ]] 00:01:53.825 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.825 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.825 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.825 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:53.825 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.825 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.825 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.825 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.825 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.825 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.825 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:53.825 Test configuration: 00:01:53.825 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.825 SPDK_RUN_ASAN=1 00:01:53.825 SPDK_RUN_UBSAN=1 00:01:53.825 SPDK_TEST_RAID=1 00:01:53.825 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.085 RUN_NIGHTLY=1 16:03:08 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:54.085 16:03:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:54.085 16:03:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:54.085 16:03:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:54.085 16:03:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:54.085 16:03:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:54.085 16:03:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.086 16:03:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.086 16:03:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.086 16:03:08 -- paths/export.sh@5 -- $ export PATH 00:01:54.086 16:03:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.086 16:03:08 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:54.086 16:03:08 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:54.086 16:03:08 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727539388.XXXXXX 00:01:54.086 16:03:08 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727539388.lcZAWK 00:01:54.086 16:03:08 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:54.086 16:03:08 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:01:54.086 16:03:08 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:54.086 16:03:08 -- common/autobuild_common.sh@492 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:54.086 16:03:08 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:54.086 16:03:08 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:54.086 16:03:08 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:54.086 16:03:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.086 16:03:08 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:54.086 16:03:08 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:54.086 16:03:08 -- pm/common@17 -- $ local monitor 00:01:54.086 16:03:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.086 16:03:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.086 16:03:08 -- pm/common@25 -- $ sleep 1 00:01:54.086 16:03:08 -- pm/common@21 -- $ date +%s 00:01:54.086 16:03:08 -- pm/common@21 -- $ date +%s 00:01:54.086 16:03:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727539388 00:01:54.086 16:03:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727539388 00:01:54.086 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727539388_collect-cpu-load.pm.log 00:01:54.086 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727539388_collect-vmstat.pm.log 00:01:55.026 16:03:09 -- common/autobuild_common.sh@498 -- 
$ trap stop_monitor_resources EXIT 00:01:55.026 16:03:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:55.026 16:03:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:55.026 16:03:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:55.026 16:03:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:55.026 Sat Sep 28 04:03:09 PM UTC 2024 00:01:55.026 16:03:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:55.026 v25.01-pre-17-g09cc66129 00:01:55.026 16:03:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:55.026 16:03:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:55.026 16:03:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:55.026 16:03:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:55.027 16:03:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.027 ************************************ 00:01:55.027 START TEST asan 00:01:55.027 ************************************ 00:01:55.027 using asan 00:01:55.027 16:03:09 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:55.027 00:01:55.027 real 0m0.001s 00:01:55.027 user 0m0.000s 00:01:55.027 sys 0m0.000s 00:01:55.027 16:03:09 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:55.027 16:03:09 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:55.027 ************************************ 00:01:55.027 END TEST asan 00:01:55.027 ************************************ 00:01:55.287 16:03:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:55.287 16:03:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:55.287 16:03:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:55.287 16:03:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:55.287 16:03:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.287 ************************************ 00:01:55.287 START TEST ubsan 00:01:55.287 ************************************ 00:01:55.287 using ubsan 00:01:55.287 16:03:09 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:55.287 00:01:55.287 real 0m0.000s 00:01:55.287 user 0m0.000s 00:01:55.287 sys 0m0.000s 00:01:55.287 16:03:09 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:55.287 16:03:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:55.287 ************************************ 00:01:55.287 END TEST ubsan 00:01:55.287 ************************************ 00:01:55.287 16:03:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:55.287 16:03:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:55.287 16:03:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:55.287 16:03:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:55.287 16:03:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:55.287 16:03:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:55.287 16:03:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:55.287 16:03:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:55.287 16:03:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:55.287 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:55.287 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:55.857 Using 'verbs' RDMA provider 00:02:14.905 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:29.799 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:29.799 Creating mk/config.mk...done. 00:02:29.799 Creating mk/cc.flags.mk...done. 00:02:29.799 Type 'make' to build. 
00:02:29.799 16:03:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:29.799 16:03:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:29.799 16:03:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:29.799 16:03:42 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.799 ************************************
00:02:29.799 START TEST make
00:02:29.799 ************************************
00:02:29.799 16:03:42 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:29.799 make[1]: Nothing to be done for 'all'.
00:02:37.925 The Meson build system
00:02:37.925 Version: 1.5.0
00:02:37.925 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:37.925 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:37.925 Build type: native build
00:02:37.925 Program cat found: YES (/usr/bin/cat)
00:02:37.925 Project name: DPDK
00:02:37.925 Project version: 24.03.0
00:02:37.925 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:37.925 C linker for the host machine: cc ld.bfd 2.40-14
00:02:37.925 Host machine cpu family: x86_64
00:02:37.925 Host machine cpu: x86_64
00:02:37.925 Message: ## Building in Developer Mode ##
00:02:37.925 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:37.925 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:37.925 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:37.925 Program python3 found: YES (/usr/bin/python3)
00:02:37.925 Program cat found: YES (/usr/bin/cat)
00:02:37.925 Compiler for C supports arguments -march=native: YES
00:02:37.925 Checking for size of "void *" : 8
00:02:37.925 Checking for size of "void *" : 8 (cached)
00:02:37.925 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:37.925 Library m found: YES
00:02:37.925 Library numa found: YES
00:02:37.925 Has header "numaif.h" : YES
00:02:37.925 Library fdt found: NO
00:02:37.925 Library execinfo found: NO
00:02:37.925 Has header "execinfo.h" : YES
00:02:37.925 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:37.925 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:37.926 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:37.926 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:37.926 Run-time dependency openssl found: YES 3.1.1
00:02:37.926 Run-time dependency libpcap found: YES 1.10.4
00:02:37.926 Has header "pcap.h" with dependency libpcap: YES
00:02:37.926 Compiler for C supports arguments -Wcast-qual: YES
00:02:37.926 Compiler for C supports arguments -Wdeprecated: YES
00:02:37.926 Compiler for C supports arguments -Wformat: YES
00:02:37.926 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:37.926 Compiler for C supports arguments -Wformat-security: NO
00:02:37.926 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:37.926 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:37.926 Compiler for C supports arguments -Wnested-externs: YES
00:02:37.926 Compiler for C supports arguments -Wold-style-definition: YES
00:02:37.926 Compiler for C supports arguments -Wpointer-arith: YES
00:02:37.926 Compiler for C supports arguments -Wsign-compare: YES
00:02:37.926 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:37.926 Compiler for C supports arguments -Wundef: YES
00:02:37.926 Compiler for C supports arguments -Wwrite-strings: YES
00:02:37.926 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:37.926 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:37.926 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:37.926 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:37.926 Program objdump found: YES (/usr/bin/objdump)
00:02:37.926 Compiler for C supports arguments -mavx512f: YES
00:02:37.926 Checking if "AVX512 checking" compiles: YES
00:02:37.926 Fetching value of define "__SSE4_2__" : 1
00:02:37.926 Fetching value of define "__AES__" : 1
00:02:37.926 Fetching value of define "__AVX__" : 1
00:02:37.926 Fetching value of define "__AVX2__" : 1
00:02:37.926 Fetching value of define "__AVX512BW__" : 1
00:02:37.926 Fetching value of define "__AVX512CD__" : 1
00:02:37.926 Fetching value of define "__AVX512DQ__" : 1
00:02:37.926 Fetching value of define "__AVX512F__" : 1
00:02:37.926 Fetching value of define "__AVX512VL__" : 1
00:02:37.926 Fetching value of define "__PCLMUL__" : 1
00:02:37.926 Fetching value of define "__RDRND__" : 1
00:02:37.926 Fetching value of define "__RDSEED__" : 1
00:02:37.926 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:37.926 Fetching value of define "__znver1__" : (undefined)
00:02:37.926 Fetching value of define "__znver2__" : (undefined)
00:02:37.926 Fetching value of define "__znver3__" : (undefined)
00:02:37.926 Fetching value of define "__znver4__" : (undefined)
00:02:37.926 Library asan found: YES
00:02:37.926 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:37.926 Message: lib/log: Defining dependency "log"
00:02:37.926 Message: lib/kvargs: Defining dependency "kvargs"
00:02:37.926 Message: lib/telemetry: Defining dependency "telemetry"
00:02:37.926 Library rt found: YES
00:02:37.926 Checking for function "getentropy" : NO
00:02:37.926 Message: lib/eal: Defining dependency "eal"
00:02:37.926 Message: lib/ring: Defining dependency "ring"
00:02:37.926 Message: lib/rcu: Defining dependency "rcu"
00:02:37.926 Message: lib/mempool: Defining dependency "mempool"
00:02:37.926 Message: lib/mbuf: Defining dependency "mbuf"
00:02:37.926 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:37.926 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:37.926 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:37.926 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:37.926 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:37.926 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:37.926 Compiler for C supports arguments -mpclmul: YES
00:02:37.926 Compiler for C supports arguments -maes: YES
00:02:37.926 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:37.926 Compiler for C supports arguments -mavx512bw: YES
00:02:37.926 Compiler for C supports arguments -mavx512dq: YES
00:02:37.926 Compiler for C supports arguments -mavx512vl: YES
00:02:37.926 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:37.926 Compiler for C supports arguments -mavx2: YES
00:02:37.926 Compiler for C supports arguments -mavx: YES
00:02:37.926 Message: lib/net: Defining dependency "net"
00:02:37.926 Message: lib/meter: Defining dependency "meter"
00:02:37.926 Message: lib/ethdev: Defining dependency "ethdev"
00:02:37.926 Message: lib/pci: Defining dependency "pci"
00:02:37.926 Message: lib/cmdline: Defining dependency "cmdline"
00:02:37.926 Message: lib/hash: Defining dependency "hash"
00:02:37.926 Message: lib/timer: Defining dependency "timer"
00:02:37.926 Message: lib/compressdev: Defining dependency "compressdev"
00:02:37.926 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:37.926 Message: lib/dmadev: Defining dependency "dmadev"
00:02:37.926 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:37.926 Message: lib/power: Defining dependency "power"
00:02:37.926 Message: lib/reorder: Defining dependency "reorder"
00:02:37.926 Message: lib/security: Defining dependency "security"
00:02:37.926 Has header "linux/userfaultfd.h" : YES
00:02:37.926 Has header "linux/vduse.h" : YES
00:02:37.926 Message: lib/vhost: Defining dependency "vhost"
00:02:37.926 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:37.926 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:37.926 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:37.926 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:37.926 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:37.926 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:37.926 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:37.926 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:37.926 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:37.926 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:37.926 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:37.926 Configuring doxy-api-html.conf using configuration
00:02:37.926 Configuring doxy-api-man.conf using configuration
00:02:37.926 Program mandb found: YES (/usr/bin/mandb)
00:02:37.926 Program sphinx-build found: NO
00:02:37.926 Configuring rte_build_config.h using configuration
00:02:37.926 Message:
00:02:37.926 =================
00:02:37.926 Applications Enabled
00:02:37.926 =================
00:02:37.926
00:02:37.926 apps:
00:02:37.926
00:02:37.926
00:02:37.926 Message:
00:02:37.926 =================
00:02:37.926 Libraries Enabled
00:02:37.926 =================
00:02:37.926
00:02:37.926 libs:
00:02:37.926 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:37.926 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:37.926 cryptodev, dmadev, power, reorder, security, vhost,
00:02:37.926
00:02:37.926 Message:
00:02:37.926 ===============
00:02:37.926 Drivers Enabled
00:02:37.926 ===============
00:02:37.926
00:02:37.926 common:
00:02:37.926
00:02:37.926 bus:
00:02:37.926 pci, vdev,
00:02:37.926 mempool:
00:02:37.926 ring,
00:02:37.926 dma:
00:02:37.926
00:02:37.926 net:
00:02:37.926
00:02:37.926 crypto:
00:02:37.926
00:02:37.926 compress:
00:02:37.926
00:02:37.926 vdpa:
00:02:37.926
00:02:37.926
00:02:37.926 Message:
00:02:37.926 =================
00:02:37.926 Content Skipped
00:02:37.926 =================
00:02:37.926
00:02:37.926 apps:
00:02:37.926 dumpcap: explicitly disabled via build config
00:02:37.926 graph: explicitly disabled via build config
00:02:37.926 pdump: explicitly disabled via build config
00:02:37.926 proc-info: explicitly disabled via build config
00:02:37.926 test-acl: explicitly disabled via build config
00:02:37.926 test-bbdev: explicitly disabled via build config
00:02:37.926 test-cmdline: explicitly disabled via build config
00:02:37.926 test-compress-perf: explicitly disabled via build config
00:02:37.926 test-crypto-perf: explicitly disabled via build config
00:02:37.926 test-dma-perf: explicitly disabled via build config
00:02:37.926 test-eventdev: explicitly disabled via build config
00:02:37.926 test-fib: explicitly disabled via build config
00:02:37.926 test-flow-perf: explicitly disabled via build config
00:02:37.926 test-gpudev: explicitly disabled via build config
00:02:37.926 test-mldev: explicitly disabled via build config
00:02:37.926 test-pipeline: explicitly disabled via build config
00:02:37.926 test-pmd: explicitly disabled via build config
00:02:37.926 test-regex: explicitly disabled via build config
00:02:37.926 test-sad: explicitly disabled via build config
00:02:37.926 test-security-perf: explicitly disabled via build config
00:02:37.926
00:02:37.926 libs:
00:02:37.926 argparse: explicitly disabled via build config
00:02:37.926 metrics: explicitly disabled via build config
00:02:37.926 acl: explicitly disabled via build config
00:02:37.926 bbdev: explicitly disabled via build config
00:02:37.926 bitratestats: explicitly disabled via build config
00:02:37.926 bpf: explicitly disabled via build config
00:02:37.926 cfgfile: explicitly disabled via build config
00:02:37.926 distributor: explicitly disabled via build config
00:02:37.926 efd: explicitly disabled via build config
00:02:37.926 eventdev: explicitly disabled via build config
00:02:37.926 dispatcher: explicitly disabled via build config
00:02:37.926 gpudev: explicitly disabled via build config
00:02:37.926 gro: explicitly disabled via build config
00:02:37.926 gso: explicitly disabled via build config
00:02:37.926 ip_frag: explicitly disabled via build config
00:02:37.926 jobstats: explicitly disabled via build config
00:02:37.926 latencystats: explicitly disabled via build config
00:02:37.926 lpm: explicitly disabled via build config
00:02:37.926 member: explicitly disabled via build config
00:02:37.926 pcapng: explicitly disabled via build config
00:02:37.926 rawdev: explicitly disabled via build config
00:02:37.926 regexdev: explicitly disabled via build config
00:02:37.926 mldev: explicitly disabled via build config
00:02:37.926 rib: explicitly disabled via build config
00:02:37.926 sched: explicitly disabled via build config
00:02:37.926 stack: explicitly disabled via build config
00:02:37.926 ipsec: explicitly disabled via build config
00:02:37.926 pdcp: explicitly disabled via build config
00:02:37.926 fib: explicitly disabled via build config
00:02:37.926 port: explicitly disabled via build config
00:02:37.927 pdump: explicitly disabled via build config
00:02:37.927 table: explicitly disabled via build config
00:02:37.927 pipeline: explicitly disabled via build config
00:02:37.927 graph: explicitly disabled via build config
00:02:37.927 node: explicitly disabled via build config
00:02:37.927
00:02:37.927 drivers:
00:02:37.927 common/cpt: not in enabled drivers build config
00:02:37.927 common/dpaax: not in enabled drivers build config
00:02:37.927 common/iavf: not in enabled drivers build config
00:02:37.927 common/idpf: not in enabled drivers build config
00:02:37.927 common/ionic: not in enabled drivers build config
00:02:37.927 common/mvep: not in enabled drivers build config
00:02:37.927 common/octeontx: not in enabled drivers build config
00:02:37.927 bus/auxiliary: not in enabled drivers build config
00:02:37.927 bus/cdx: not in enabled drivers build config
00:02:37.927 bus/dpaa: not in enabled drivers build config
00:02:37.927 bus/fslmc: not in enabled drivers build config
00:02:37.927 bus/ifpga: not in enabled drivers build config
00:02:37.927 bus/platform: not in enabled drivers build config
00:02:37.927 bus/uacce: not in enabled drivers build config
00:02:37.927 bus/vmbus: not in enabled drivers build config
00:02:37.927 common/cnxk: not in enabled drivers build config
00:02:37.927 common/mlx5: not in enabled drivers build config
00:02:37.927 common/nfp: not in enabled drivers build config
00:02:37.927 common/nitrox: not in enabled drivers build config
00:02:37.927 common/qat: not in enabled drivers build config
00:02:37.927 common/sfc_efx: not in enabled drivers build config
00:02:37.927 mempool/bucket: not in enabled drivers build config
00:02:37.927 mempool/cnxk: not in enabled drivers build config
00:02:37.927 mempool/dpaa: not in enabled drivers build config
00:02:37.927 mempool/dpaa2: not in enabled drivers build config
00:02:37.927 mempool/octeontx: not in enabled drivers build config
00:02:37.927 mempool/stack: not in enabled drivers build config
00:02:37.927 dma/cnxk: not in enabled drivers build config
00:02:37.927 dma/dpaa: not in enabled drivers build config
00:02:37.927 dma/dpaa2: not in enabled drivers build config
00:02:37.927 dma/hisilicon: not in enabled drivers build config
00:02:37.927 dma/idxd: not in enabled drivers build config
00:02:37.927 dma/ioat: not in enabled drivers build config
00:02:37.927 dma/skeleton: not in enabled drivers build config
00:02:37.927 net/af_packet: not in enabled drivers build config
00:02:37.927 net/af_xdp: not in enabled drivers build config
00:02:37.927 net/ark: not in enabled drivers build config
00:02:37.927 net/atlantic: not in enabled drivers build config
00:02:37.927 net/avp: not in enabled drivers build config
00:02:37.927 net/axgbe: not in enabled drivers build config
00:02:37.927 net/bnx2x: not in enabled drivers build config
00:02:37.927 net/bnxt: not in enabled drivers build config
00:02:37.927 net/bonding: not in enabled drivers build config
00:02:37.927 net/cnxk: not in enabled drivers build config
00:02:37.927 net/cpfl: not in enabled drivers build config
00:02:37.927 net/cxgbe: not in enabled drivers build config
00:02:37.927 net/dpaa: not in enabled drivers build config
00:02:37.927 net/dpaa2: not in enabled drivers build config
00:02:37.927 net/e1000: not in enabled drivers build config
00:02:37.927 net/ena: not in enabled drivers build config
00:02:37.927 net/enetc: not in enabled drivers build config
00:02:37.927 net/enetfec: not in enabled drivers build config
00:02:37.927 net/enic: not in enabled drivers build config
00:02:37.927 net/failsafe: not in enabled drivers build config
00:02:37.927 net/fm10k: not in enabled drivers build config
00:02:37.927 net/gve: not in enabled drivers build config
00:02:37.927 net/hinic: not in enabled drivers build config
00:02:37.927 net/hns3: not in enabled drivers build config
00:02:37.927 net/i40e: not in enabled drivers build config
00:02:37.927 net/iavf: not in enabled drivers build config
00:02:37.927 net/ice: not in enabled drivers build config
00:02:37.927 net/idpf: not in enabled drivers build config
00:02:37.927 net/igc: not in enabled drivers build config
00:02:37.927 net/ionic: not in enabled drivers build config
00:02:37.927 net/ipn3ke: not in enabled drivers build config
00:02:37.927 net/ixgbe: not in enabled drivers build config
00:02:37.927 net/mana: not in enabled drivers build config
00:02:37.927 net/memif: not in enabled drivers build config
00:02:37.927 net/mlx4: not in enabled drivers build config
00:02:37.927 net/mlx5: not in enabled drivers build config
00:02:37.927 net/mvneta: not in enabled drivers build config
00:02:37.927 net/mvpp2: not in enabled drivers build config
00:02:37.927 net/netvsc: not in enabled drivers build config
00:02:37.927 net/nfb: not in enabled drivers build config
00:02:37.927 net/nfp: not in enabled drivers build config
00:02:37.927 net/ngbe: not in enabled drivers build config
00:02:37.927 net/null: not in enabled drivers build config
00:02:37.927 net/octeontx: not in enabled drivers build config
00:02:37.927 net/octeon_ep: not in enabled drivers build config
00:02:37.927 net/pcap: not in enabled drivers build config
00:02:37.927 net/pfe: not in enabled drivers build config
00:02:37.927 net/qede: not in enabled drivers build config
00:02:37.927 net/ring: not in enabled drivers build config
00:02:37.927 net/sfc: not in enabled drivers build config
00:02:37.927 net/softnic: not in enabled drivers build config
00:02:37.927 net/tap: not in enabled drivers build config
00:02:37.927 net/thunderx: not in enabled drivers build config
00:02:37.927 net/txgbe: not in enabled drivers build config
00:02:37.927 net/vdev_netvsc: not in enabled drivers build config
00:02:37.927 net/vhost: not in enabled drivers build config
00:02:37.927 net/virtio: not in enabled drivers build config
00:02:37.927 net/vmxnet3: not in enabled drivers build config
00:02:37.927 raw/*: missing internal dependency, "rawdev"
00:02:37.927 crypto/armv8: not in enabled drivers build config
00:02:37.927 crypto/bcmfs: not in enabled drivers build config
00:02:37.927 crypto/caam_jr: not in enabled drivers build config
00:02:37.927 crypto/ccp: not in enabled drivers build config
00:02:37.927 crypto/cnxk: not in enabled drivers build config
00:02:37.927 crypto/dpaa_sec: not in enabled drivers build config
00:02:37.927 crypto/dpaa2_sec: not in enabled drivers build config
00:02:37.927 crypto/ipsec_mb: not in enabled drivers build config
00:02:37.927 crypto/mlx5: not in enabled drivers build config
00:02:37.927 crypto/mvsam: not in enabled drivers build config
00:02:37.927 crypto/nitrox: not in enabled drivers build config
00:02:37.927 crypto/null: not in enabled drivers build config
00:02:37.927 crypto/octeontx: not in enabled drivers build config
00:02:37.927 crypto/openssl: not in enabled drivers build config
00:02:37.927 crypto/scheduler: not in enabled drivers build config
00:02:37.927 crypto/uadk: not in enabled drivers build config
00:02:37.927 crypto/virtio: not in enabled drivers build config
00:02:37.927 compress/isal: not in enabled drivers build config
00:02:37.927 compress/mlx5: not in enabled drivers build config
00:02:37.927 compress/nitrox: not in enabled drivers build config
00:02:37.927 compress/octeontx: not in enabled drivers build config
00:02:37.927 compress/zlib: not in enabled drivers build config
00:02:37.927 regex/*: missing internal dependency, "regexdev"
00:02:37.927 ml/*: missing internal dependency, "mldev"
00:02:37.927 vdpa/ifc: not in enabled drivers build config
00:02:37.927 vdpa/mlx5: not in enabled drivers build config
00:02:37.927 vdpa/nfp: not in enabled drivers build config
00:02:37.927 vdpa/sfc: not in enabled drivers build config
00:02:37.927 event/*: missing internal dependency, "eventdev"
00:02:37.927 baseband/*: missing internal dependency, "bbdev"
00:02:37.927 gpu/*: missing internal dependency, "gpudev"
00:02:37.927
00:02:37.927
00:02:38.187 Build targets in project: 85
00:02:38.187
00:02:38.187 DPDK 24.03.0
00:02:38.187
00:02:38.187 User defined options
00:02:38.187 buildtype : debug
00:02:38.187 default_library : shared
00:02:38.187 libdir : lib
00:02:38.187 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:38.187 b_sanitize : address
00:02:38.187 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:38.187 c_link_args :
00:02:38.187 cpu_instruction_set: native
00:02:38.187 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:38.187 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:38.187 enable_docs : false
00:02:38.187 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:38.187 enable_kmods : false
00:02:38.187 max_lcores : 128
00:02:38.187 tests : false
00:02:38.187
00:02:38.187 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:38.756 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:38.756 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:38.756 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:38.756 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:38.756 [4/268] Linking static target lib/librte_log.a
00:02:38.756 [5/268] Linking static target lib/librte_kvargs.a
00:02:38.756 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:39.016 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:39.016 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:39.275 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:39.275 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:39.275 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.275 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:39.275 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:39.275 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:39.275 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:39.275 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:39.535 [17/268] Linking static target lib/librte_telemetry.a
00:02:39.535 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:39.795 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.795 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:39.795 [21/268] Linking target lib/librte_log.so.24.1
00:02:39.795 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:39.795 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:39.795 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:39.795 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:39.795 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:39.795 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:39.795 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:40.056 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:40.056 [30/268] Linking target lib/librte_kvargs.so.24.1
00:02:40.056 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:40.056 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:40.056 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:40.315 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.315 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:40.315 [36/268] Linking target lib/librte_telemetry.so.24.1
00:02:40.315 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:40.315 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:40.315 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:40.575 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:40.575 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:40.575 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:40.575 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:40.575 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:40.575 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:40.575 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:40.575 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:40.575 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:40.836 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:41.096 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:41.096 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:41.096 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:41.096 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:41.096 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:41.096 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:41.096 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:41.096 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:41.358 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:41.358 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:41.358 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:41.358 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:41.358 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:41.619 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:41.619 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:41.619 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:41.619 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:41.879 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:41.879 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:41.879 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:41.879 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:41.879 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:42.139 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:42.139 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:42.139 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:42.139 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:42.139 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:42.139 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:42.139 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:42.139 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:42.139 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:42.398 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:42.398 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:42.398 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:42.398 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:42.398 [85/268] Linking static target lib/librte_ring.a
00:02:42.658 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:42.658 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:42.658 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:42.658 [89/268] Linking static target lib/librte_eal.a
00:02:42.658 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:42.658 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:42.658 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:42.658 [93/268] Linking static target lib/librte_mempool.a
00:02:42.658 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:42.917 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:42.917 [96/268] Linking static target lib/librte_rcu.a
00:02:42.917 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.917 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:43.176 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:43.177 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:43.177 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:43.177 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:43.436 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.436 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:43.436 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:43.436 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:43.436 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:43.436 [108/268] Linking static target lib/librte_net.a
00:02:43.436 [109/268] Linking static target lib/librte_mbuf.a
00:02:43.436 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:43.436 [111/268] Linking static target lib/librte_meter.a
00:02:43.701 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:43.701 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:43.994 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.994 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:43.994 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.994 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.994 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:44.275 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:44.275 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:44.275 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:44.275 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:44.534 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.534 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:44.534 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:44.534 [126/268] Linking static target lib/librte_pci.a
00:02:44.793 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:44.793 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:44.793 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:44.793 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:44.793 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.793 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.793 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.793 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.052 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.052 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.052 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.052 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.052 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.052 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.052 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.052 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.052 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.052 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.311 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.311 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.311 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.311 [148/268] Linking static target lib/librte_cmdline.a 00:02:45.311 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:45.570 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.570 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.830 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:02:45.830 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.830 [154/268] Linking static target lib/librte_timer.a 00:02:45.830 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.830 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.089 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.089 [158/268] Linking static target lib/librte_compressdev.a 00:02:46.089 [159/268] Linking static target lib/librte_ethdev.a 00:02:46.089 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.089 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.089 [162/268] Linking static target lib/librte_hash.a 00:02:46.089 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.348 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.348 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.349 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.349 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.349 [168/268] Linking static target lib/librte_dmadev.a 00:02:46.349 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.608 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.608 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.867 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.867 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:46.867 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.867 [175/268] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:47.126 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:47.126 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.126 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:47.126 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:47.126 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.126 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.126 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.126 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.126 [184/268] Linking static target lib/librte_cryptodev.a 00:02:47.385 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.385 [186/268] Linking static target lib/librte_power.a 00:02:47.645 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.645 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.645 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.905 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.905 [191/268] Linking static target lib/librte_reorder.a 00:02:47.905 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.905 [193/268] Linking static target lib/librte_security.a 00:02:48.164 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:48.423 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.423 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.682 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:48.682 [198/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.683 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.683 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.943 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.943 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.943 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.943 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.202 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.202 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.463 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.463 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.463 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:49.463 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.463 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.463 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:49.723 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:49.723 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.723 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.723 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:49.723 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.723 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.723 
[219/268] Linking static target drivers/librte_bus_pci.a 00:02:49.723 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:49.723 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:49.982 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.982 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.982 [224/268] Linking static target drivers/librte_mempool_ring.a 00:02:49.982 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.982 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.982 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.364 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.301 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.301 [230/268] Linking target lib/librte_eal.so.24.1 00:02:52.561 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:52.561 [232/268] Linking target lib/librte_meter.so.24.1 00:02:52.561 [233/268] Linking target lib/librte_timer.so.24.1 00:02:52.561 [234/268] Linking target lib/librte_pci.so.24.1 00:02:52.561 [235/268] Linking target lib/librte_ring.so.24.1 00:02:52.561 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:52.561 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:52.561 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:52.561 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:52.561 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:52.561 [241/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:52.561 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:52.820 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:52.820 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:52.820 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:52.820 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:52.820 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:52.820 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:52.820 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.080 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.080 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:53.080 [252/268] Linking target lib/librte_net.so.24.1 00:02:53.080 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:53.080 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:53.080 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.080 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.080 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:53.080 [258/268] Linking target lib/librte_hash.so.24.1 00:02:53.339 [259/268] Linking target lib/librte_security.so.24.1 00:02:53.339 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:53.907 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.907 [262/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.907 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:54.167 [264/268] Linking static target lib/librte_vhost.a 00:02:54.167 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:54.167 
[266/268] Linking target lib/librte_power.so.24.1 00:02:56.704 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.704 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:56.704 INFO: autodetecting backend as ninja 00:02:56.704 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:14.807 CC lib/log/log_flags.o 00:03:14.807 CC lib/log/log.o 00:03:14.807 CC lib/log/log_deprecated.o 00:03:14.807 CC lib/ut/ut.o 00:03:14.807 CC lib/ut_mock/mock.o 00:03:14.807 LIB libspdk_ut.a 00:03:14.807 LIB libspdk_ut_mock.a 00:03:14.807 SO libspdk_ut.so.2.0 00:03:14.807 LIB libspdk_log.a 00:03:14.807 SO libspdk_ut_mock.so.6.0 00:03:14.807 SYMLINK libspdk_ut.so 00:03:14.807 SO libspdk_log.so.7.0 00:03:14.807 SYMLINK libspdk_ut_mock.so 00:03:14.807 SYMLINK libspdk_log.so 00:03:14.807 CXX lib/trace_parser/trace.o 00:03:14.807 CC lib/dma/dma.o 00:03:14.807 CC lib/ioat/ioat.o 00:03:14.807 CC lib/util/base64.o 00:03:14.807 CC lib/util/bit_array.o 00:03:14.807 CC lib/util/crc16.o 00:03:14.807 CC lib/util/crc32.o 00:03:14.807 CC lib/util/cpuset.o 00:03:14.807 CC lib/util/crc32c.o 00:03:14.807 CC lib/vfio_user/host/vfio_user_pci.o 00:03:14.807 CC lib/util/crc32_ieee.o 00:03:14.807 CC lib/util/crc64.o 00:03:14.807 CC lib/vfio_user/host/vfio_user.o 00:03:14.807 LIB libspdk_dma.a 00:03:14.807 CC lib/util/dif.o 00:03:14.807 SO libspdk_dma.so.5.0 00:03:14.807 CC lib/util/fd.o 00:03:14.807 CC lib/util/fd_group.o 00:03:14.807 SYMLINK libspdk_dma.so 00:03:14.807 CC lib/util/file.o 00:03:14.807 CC lib/util/hexlify.o 00:03:14.807 CC lib/util/iov.o 00:03:14.807 LIB libspdk_ioat.a 00:03:14.807 SO libspdk_ioat.so.7.0 00:03:14.807 LIB libspdk_vfio_user.a 00:03:14.807 CC lib/util/math.o 00:03:14.807 SO libspdk_vfio_user.so.5.0 00:03:14.807 SYMLINK libspdk_ioat.so 00:03:15.066 CC lib/util/net.o 00:03:15.066 CC lib/util/pipe.o 00:03:15.066 CC lib/util/strerror_tls.o 00:03:15.066 CC 
lib/util/string.o 00:03:15.066 SYMLINK libspdk_vfio_user.so 00:03:15.066 CC lib/util/uuid.o 00:03:15.066 CC lib/util/xor.o 00:03:15.066 CC lib/util/zipf.o 00:03:15.066 CC lib/util/md5.o 00:03:15.326 LIB libspdk_util.a 00:03:15.585 SO libspdk_util.so.10.0 00:03:15.585 LIB libspdk_trace_parser.a 00:03:15.585 SO libspdk_trace_parser.so.6.0 00:03:15.585 SYMLINK libspdk_util.so 00:03:15.585 SYMLINK libspdk_trace_parser.so 00:03:15.844 CC lib/json/json_parse.o 00:03:15.844 CC lib/json/json_util.o 00:03:15.844 CC lib/idxd/idxd.o 00:03:15.844 CC lib/json/json_write.o 00:03:15.844 CC lib/conf/conf.o 00:03:15.844 CC lib/env_dpdk/env.o 00:03:15.844 CC lib/idxd/idxd_user.o 00:03:15.844 CC lib/vmd/vmd.o 00:03:15.844 CC lib/rdma_utils/rdma_utils.o 00:03:15.844 CC lib/rdma_provider/common.o 00:03:16.103 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:16.103 LIB libspdk_conf.a 00:03:16.103 CC lib/env_dpdk/memory.o 00:03:16.103 SO libspdk_conf.so.6.0 00:03:16.103 CC lib/env_dpdk/pci.o 00:03:16.103 CC lib/idxd/idxd_kernel.o 00:03:16.103 LIB libspdk_rdma_utils.a 00:03:16.103 LIB libspdk_json.a 00:03:16.103 SYMLINK libspdk_conf.so 00:03:16.103 CC lib/env_dpdk/init.o 00:03:16.103 SO libspdk_rdma_utils.so.1.0 00:03:16.103 SO libspdk_json.so.6.0 00:03:16.103 LIB libspdk_rdma_provider.a 00:03:16.103 SYMLINK libspdk_rdma_utils.so 00:03:16.103 CC lib/env_dpdk/threads.o 00:03:16.103 SO libspdk_rdma_provider.so.6.0 00:03:16.103 SYMLINK libspdk_json.so 00:03:16.103 CC lib/vmd/led.o 00:03:16.362 SYMLINK libspdk_rdma_provider.so 00:03:16.362 CC lib/env_dpdk/pci_ioat.o 00:03:16.362 CC lib/env_dpdk/pci_virtio.o 00:03:16.362 CC lib/jsonrpc/jsonrpc_server.o 00:03:16.362 CC lib/env_dpdk/pci_vmd.o 00:03:16.362 CC lib/env_dpdk/pci_idxd.o 00:03:16.362 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:16.362 CC lib/env_dpdk/pci_event.o 00:03:16.622 CC lib/env_dpdk/sigbus_handler.o 00:03:16.622 CC lib/env_dpdk/pci_dpdk.o 00:03:16.622 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.622 LIB libspdk_idxd.a 00:03:16.622 SO 
libspdk_idxd.so.12.1 00:03:16.622 LIB libspdk_vmd.a 00:03:16.622 CC lib/jsonrpc/jsonrpc_client.o 00:03:16.622 SO libspdk_vmd.so.6.0 00:03:16.622 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.622 SYMLINK libspdk_idxd.so 00:03:16.622 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.622 SYMLINK libspdk_vmd.so 00:03:16.880 LIB libspdk_jsonrpc.a 00:03:16.880 SO libspdk_jsonrpc.so.6.0 00:03:16.880 SYMLINK libspdk_jsonrpc.so 00:03:17.530 CC lib/rpc/rpc.o 00:03:17.530 LIB libspdk_env_dpdk.a 00:03:17.530 SO libspdk_env_dpdk.so.15.0 00:03:17.530 LIB libspdk_rpc.a 00:03:17.789 SO libspdk_rpc.so.6.0 00:03:17.789 SYMLINK libspdk_env_dpdk.so 00:03:17.789 SYMLINK libspdk_rpc.so 00:03:18.048 CC lib/notify/notify.o 00:03:18.048 CC lib/notify/notify_rpc.o 00:03:18.048 CC lib/trace/trace.o 00:03:18.048 CC lib/trace/trace_rpc.o 00:03:18.048 CC lib/trace/trace_flags.o 00:03:18.048 CC lib/keyring/keyring.o 00:03:18.048 CC lib/keyring/keyring_rpc.o 00:03:18.306 LIB libspdk_notify.a 00:03:18.306 SO libspdk_notify.so.6.0 00:03:18.306 LIB libspdk_keyring.a 00:03:18.306 SYMLINK libspdk_notify.so 00:03:18.306 SO libspdk_keyring.so.2.0 00:03:18.306 LIB libspdk_trace.a 00:03:18.565 SO libspdk_trace.so.11.0 00:03:18.565 SYMLINK libspdk_keyring.so 00:03:18.565 SYMLINK libspdk_trace.so 00:03:18.824 CC lib/sock/sock.o 00:03:18.824 CC lib/sock/sock_rpc.o 00:03:18.824 CC lib/thread/thread.o 00:03:18.824 CC lib/thread/iobuf.o 00:03:19.393 LIB libspdk_sock.a 00:03:19.393 SO libspdk_sock.so.10.0 00:03:19.393 SYMLINK libspdk_sock.so 00:03:19.961 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.961 CC lib/nvme/nvme_ctrlr.o 00:03:19.961 CC lib/nvme/nvme_fabric.o 00:03:19.961 CC lib/nvme/nvme_ns_cmd.o 00:03:19.961 CC lib/nvme/nvme_pcie_common.o 00:03:19.961 CC lib/nvme/nvme_ns.o 00:03:19.961 CC lib/nvme/nvme_pcie.o 00:03:19.961 CC lib/nvme/nvme.o 00:03:19.961 CC lib/nvme/nvme_qpair.o 00:03:20.530 LIB libspdk_thread.a 00:03:20.530 CC lib/nvme/nvme_quirks.o 00:03:20.530 CC lib/nvme/nvme_transport.o 00:03:20.530 SO 
libspdk_thread.so.10.1 00:03:20.530 CC lib/nvme/nvme_discovery.o 00:03:20.530 SYMLINK libspdk_thread.so 00:03:20.530 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.788 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.788 CC lib/nvme/nvme_tcp.o 00:03:20.788 CC lib/nvme/nvme_opal.o 00:03:20.788 CC lib/nvme/nvme_io_msg.o 00:03:21.054 CC lib/nvme/nvme_poll_group.o 00:03:21.054 CC lib/accel/accel.o 00:03:21.313 CC lib/accel/accel_rpc.o 00:03:21.313 CC lib/blob/blobstore.o 00:03:21.313 CC lib/init/json_config.o 00:03:21.313 CC lib/virtio/virtio.o 00:03:21.313 CC lib/init/subsystem.o 00:03:21.313 CC lib/accel/accel_sw.o 00:03:21.572 CC lib/fsdev/fsdev.o 00:03:21.572 CC lib/nvme/nvme_zns.o 00:03:21.572 CC lib/init/subsystem_rpc.o 00:03:21.572 CC lib/fsdev/fsdev_io.o 00:03:21.572 CC lib/init/rpc.o 00:03:21.572 CC lib/virtio/virtio_vhost_user.o 00:03:21.831 CC lib/fsdev/fsdev_rpc.o 00:03:21.831 LIB libspdk_init.a 00:03:21.831 CC lib/blob/request.o 00:03:21.831 SO libspdk_init.so.6.0 00:03:21.831 CC lib/blob/zeroes.o 00:03:21.831 SYMLINK libspdk_init.so 00:03:21.831 CC lib/blob/blob_bs_dev.o 00:03:22.089 CC lib/virtio/virtio_vfio_user.o 00:03:22.089 CC lib/virtio/virtio_pci.o 00:03:22.089 CC lib/nvme/nvme_stubs.o 00:03:22.089 LIB libspdk_fsdev.a 00:03:22.089 CC lib/nvme/nvme_auth.o 00:03:22.089 SO libspdk_fsdev.so.1.0 00:03:22.089 CC lib/nvme/nvme_cuse.o 00:03:22.346 SYMLINK libspdk_fsdev.so 00:03:22.346 LIB libspdk_accel.a 00:03:22.346 CC lib/nvme/nvme_rdma.o 00:03:22.346 SO libspdk_accel.so.16.0 00:03:22.346 LIB libspdk_virtio.a 00:03:22.346 SYMLINK libspdk_accel.so 00:03:22.346 SO libspdk_virtio.so.7.0 00:03:22.346 CC lib/event/app.o 00:03:22.346 CC lib/event/reactor.o 00:03:22.346 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:22.604 SYMLINK libspdk_virtio.so 00:03:22.604 CC lib/event/log_rpc.o 00:03:22.604 CC lib/event/app_rpc.o 00:03:22.604 CC lib/bdev/bdev.o 00:03:22.604 CC lib/event/scheduler_static.o 00:03:22.862 CC lib/bdev/bdev_rpc.o 00:03:22.862 CC lib/bdev/bdev_zone.o 
00:03:22.862 CC lib/bdev/part.o 00:03:23.145 LIB libspdk_event.a 00:03:23.145 CC lib/bdev/scsi_nvme.o 00:03:23.145 SO libspdk_event.so.14.0 00:03:23.145 SYMLINK libspdk_event.so 00:03:23.145 LIB libspdk_fuse_dispatcher.a 00:03:23.145 SO libspdk_fuse_dispatcher.so.1.0 00:03:23.145 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.711 LIB libspdk_nvme.a 00:03:23.970 SO libspdk_nvme.so.14.0 00:03:24.229 SYMLINK libspdk_nvme.so 00:03:24.797 LIB libspdk_blob.a 00:03:25.057 SO libspdk_blob.so.11.0 00:03:25.057 SYMLINK libspdk_blob.so 00:03:25.316 LIB libspdk_bdev.a 00:03:25.317 SO libspdk_bdev.so.16.0 00:03:25.576 CC lib/blobfs/blobfs.o 00:03:25.576 CC lib/blobfs/tree.o 00:03:25.576 CC lib/lvol/lvol.o 00:03:25.576 SYMLINK libspdk_bdev.so 00:03:25.835 CC lib/ublk/ublk.o 00:03:25.836 CC lib/ublk/ublk_rpc.o 00:03:25.836 CC lib/scsi/dev.o 00:03:25.836 CC lib/scsi/port.o 00:03:25.836 CC lib/scsi/lun.o 00:03:25.836 CC lib/nbd/nbd.o 00:03:25.836 CC lib/nvmf/ctrlr.o 00:03:25.836 CC lib/ftl/ftl_core.o 00:03:25.836 CC lib/scsi/scsi.o 00:03:25.836 CC lib/scsi/scsi_bdev.o 00:03:26.096 CC lib/nbd/nbd_rpc.o 00:03:26.096 CC lib/scsi/scsi_pr.o 00:03:26.096 CC lib/ftl/ftl_init.o 00:03:26.096 CC lib/ftl/ftl_layout.o 00:03:26.096 CC lib/scsi/scsi_rpc.o 00:03:26.096 LIB libspdk_nbd.a 00:03:26.356 SO libspdk_nbd.so.7.0 00:03:26.356 CC lib/ftl/ftl_debug.o 00:03:26.356 CC lib/scsi/task.o 00:03:26.356 SYMLINK libspdk_nbd.so 00:03:26.356 CC lib/nvmf/ctrlr_discovery.o 00:03:26.356 LIB libspdk_blobfs.a 00:03:26.356 CC lib/nvmf/ctrlr_bdev.o 00:03:26.356 LIB libspdk_ublk.a 00:03:26.356 SO libspdk_blobfs.so.10.0 00:03:26.356 SO libspdk_ublk.so.3.0 00:03:26.356 CC lib/ftl/ftl_io.o 00:03:26.356 CC lib/ftl/ftl_sb.o 00:03:26.356 SYMLINK libspdk_blobfs.so 00:03:26.356 CC lib/ftl/ftl_l2p.o 00:03:26.356 SYMLINK libspdk_ublk.so 00:03:26.356 LIB libspdk_scsi.a 00:03:26.356 CC lib/nvmf/subsystem.o 00:03:26.616 CC lib/nvmf/nvmf.o 00:03:26.616 LIB libspdk_lvol.a 00:03:26.616 SO libspdk_scsi.so.9.0 00:03:26.616 SO 
libspdk_lvol.so.10.0 00:03:26.616 SYMLINK libspdk_scsi.so 00:03:26.616 CC lib/nvmf/nvmf_rpc.o 00:03:26.616 SYMLINK libspdk_lvol.so 00:03:26.617 CC lib/ftl/ftl_l2p_flat.o 00:03:26.617 CC lib/ftl/ftl_nv_cache.o 00:03:26.617 CC lib/ftl/ftl_band.o 00:03:26.617 CC lib/nvmf/transport.o 00:03:26.877 CC lib/nvmf/tcp.o 00:03:26.877 CC lib/nvmf/stubs.o 00:03:27.137 CC lib/ftl/ftl_band_ops.o 00:03:27.137 CC lib/nvmf/mdns_server.o 00:03:27.397 CC lib/ftl/ftl_writer.o 00:03:27.397 CC lib/ftl/ftl_rq.o 00:03:27.397 CC lib/ftl/ftl_reloc.o 00:03:27.397 CC lib/ftl/ftl_l2p_cache.o 00:03:27.657 CC lib/nvmf/rdma.o 00:03:27.657 CC lib/ftl/ftl_p2l.o 00:03:27.657 CC lib/ftl/ftl_p2l_log.o 00:03:27.657 CC lib/nvmf/auth.o 00:03:27.657 CC lib/ftl/mngt/ftl_mngt.o 00:03:27.657 CC lib/iscsi/conn.o 00:03:27.916 CC lib/iscsi/init_grp.o 00:03:27.916 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:27.916 CC lib/vhost/vhost.o 00:03:27.916 CC lib/iscsi/iscsi.o 00:03:27.916 CC lib/vhost/vhost_rpc.o 00:03:27.916 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.174 CC lib/iscsi/param.o 00:03:28.174 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.174 CC lib/iscsi/portal_grp.o 00:03:28.174 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:28.434 CC lib/iscsi/tgt_node.o 00:03:28.434 CC lib/iscsi/iscsi_subsystem.o 00:03:28.434 CC lib/vhost/vhost_scsi.o 00:03:28.434 CC lib/vhost/vhost_blk.o 00:03:28.434 CC lib/vhost/rte_vhost_user.o 00:03:28.434 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:28.434 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.693 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.694 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.694 CC lib/iscsi/iscsi_rpc.o 00:03:28.694 CC lib/iscsi/task.o 00:03:28.694 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.953 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.953 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.953 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.211 CC lib/ftl/utils/ftl_conf.o 00:03:29.211 CC lib/ftl/utils/ftl_md.o 00:03:29.211 CC lib/ftl/utils/ftl_mempool.o 00:03:29.211 CC lib/ftl/utils/ftl_bitmap.o 
00:03:29.211 CC lib/ftl/utils/ftl_property.o 00:03:29.211 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:29.470 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:29.470 LIB libspdk_iscsi.a 00:03:29.470 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:29.470 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:29.470 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:29.470 SO libspdk_iscsi.so.8.0 00:03:29.470 LIB libspdk_vhost.a 00:03:29.470 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:29.470 SO libspdk_vhost.so.8.0 00:03:29.470 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:29.470 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:29.470 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:29.470 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:29.470 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.730 SYMLINK libspdk_iscsi.so 00:03:29.730 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:29.730 SYMLINK libspdk_vhost.so 00:03:29.730 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:29.730 CC lib/ftl/base/ftl_base_dev.o 00:03:29.730 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.730 CC lib/ftl/ftl_trace.o 00:03:29.990 LIB libspdk_nvmf.a 00:03:29.990 LIB libspdk_ftl.a 00:03:29.990 SO libspdk_nvmf.so.19.0 00:03:30.250 SO libspdk_ftl.so.9.0 00:03:30.250 SYMLINK libspdk_nvmf.so 00:03:30.510 SYMLINK libspdk_ftl.so 00:03:30.770 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.030 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.030 CC module/accel/dsa/accel_dsa.o 00:03:31.030 CC module/accel/iaa/accel_iaa.o 00:03:31.030 CC module/accel/error/accel_error.o 00:03:31.030 CC module/keyring/file/keyring.o 00:03:31.030 CC module/accel/ioat/accel_ioat.o 00:03:31.030 CC module/blob/bdev/blob_bdev.o 00:03:31.030 CC module/fsdev/aio/fsdev_aio.o 00:03:31.030 CC module/sock/posix/posix.o 00:03:31.030 LIB libspdk_env_dpdk_rpc.a 00:03:31.030 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.030 SYMLINK libspdk_env_dpdk_rpc.so 00:03:31.030 CC module/accel/error/accel_error_rpc.o 00:03:31.030 CC module/keyring/file/keyring_rpc.o 00:03:31.030 CC module/accel/ioat/accel_ioat_rpc.o 
00:03:31.030 LIB libspdk_scheduler_dynamic.a 00:03:31.030 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.030 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.290 LIB libspdk_accel_error.a 00:03:31.290 LIB libspdk_keyring_file.a 00:03:31.290 LIB libspdk_blob_bdev.a 00:03:31.290 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.290 SO libspdk_accel_error.so.2.0 00:03:31.290 SO libspdk_keyring_file.so.2.0 00:03:31.290 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.290 SO libspdk_blob_bdev.so.11.0 00:03:31.290 LIB libspdk_accel_ioat.a 00:03:31.290 LIB libspdk_accel_iaa.a 00:03:31.290 CC module/keyring/linux/keyring.o 00:03:31.290 SO libspdk_accel_ioat.so.6.0 00:03:31.290 SYMLINK libspdk_accel_error.so 00:03:31.290 SYMLINK libspdk_keyring_file.so 00:03:31.290 SO libspdk_accel_iaa.so.3.0 00:03:31.290 SYMLINK libspdk_blob_bdev.so 00:03:31.290 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:31.290 CC module/keyring/linux/keyring_rpc.o 00:03:31.290 SYMLINK libspdk_accel_ioat.so 00:03:31.290 CC module/fsdev/aio/linux_aio_mgr.o 00:03:31.290 LIB libspdk_accel_dsa.a 00:03:31.290 SYMLINK libspdk_accel_iaa.so 00:03:31.290 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.290 SO libspdk_accel_dsa.so.5.0 00:03:31.549 LIB libspdk_keyring_linux.a 00:03:31.549 SYMLINK libspdk_accel_dsa.so 00:03:31.549 SO libspdk_keyring_linux.so.1.0 00:03:31.549 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.549 SYMLINK libspdk_keyring_linux.so 00:03:31.549 CC module/bdev/delay/vbdev_delay.o 00:03:31.549 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.549 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.549 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.549 CC module/bdev/error/vbdev_error.o 00:03:31.549 CC module/bdev/gpt/gpt.o 00:03:31.549 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.549 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.549 LIB libspdk_fsdev_aio.a 00:03:31.549 SO libspdk_fsdev_aio.so.1.0 00:03:31.549 CC module/bdev/lvol/vbdev_lvol.o 00:03:31.813 CC module/bdev/malloc/bdev_malloc.o 
00:03:31.813 LIB libspdk_scheduler_gscheduler.a 00:03:31.813 LIB libspdk_sock_posix.a 00:03:31.813 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.813 SYMLINK libspdk_fsdev_aio.so 00:03:31.813 SO libspdk_sock_posix.so.6.0 00:03:31.813 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.813 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.813 LIB libspdk_blobfs_bdev.a 00:03:31.813 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.813 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.813 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.813 SO libspdk_blobfs_bdev.so.6.0 00:03:31.813 SYMLINK libspdk_sock_posix.so 00:03:31.813 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.813 SYMLINK libspdk_blobfs_bdev.so 00:03:32.076 LIB libspdk_bdev_delay.a 00:03:32.076 LIB libspdk_bdev_error.a 00:03:32.076 SO libspdk_bdev_error.so.6.0 00:03:32.076 SO libspdk_bdev_delay.so.6.0 00:03:32.076 LIB libspdk_bdev_gpt.a 00:03:32.076 SYMLINK libspdk_bdev_error.so 00:03:32.076 CC module/bdev/null/bdev_null.o 00:03:32.076 SYMLINK libspdk_bdev_delay.so 00:03:32.076 CC module/bdev/nvme/bdev_nvme.o 00:03:32.076 LIB libspdk_bdev_malloc.a 00:03:32.076 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.076 SO libspdk_bdev_gpt.so.6.0 00:03:32.076 CC module/bdev/raid/bdev_raid.o 00:03:32.076 SO libspdk_bdev_malloc.so.6.0 00:03:32.076 SYMLINK libspdk_bdev_gpt.so 00:03:32.076 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.076 SYMLINK libspdk_bdev_malloc.so 00:03:32.076 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.335 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.335 CC module/bdev/split/vbdev_split.o 00:03:32.335 LIB libspdk_bdev_lvol.a 00:03:32.335 SO libspdk_bdev_lvol.so.6.0 00:03:32.335 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.335 CC module/bdev/aio/bdev_aio.o 00:03:32.335 CC module/bdev/null/bdev_null_rpc.o 00:03:32.335 SYMLINK libspdk_bdev_lvol.so 00:03:32.335 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.335 CC module/bdev/nvme/nvme_rpc.o 00:03:32.335 LIB libspdk_bdev_passthru.a 
00:03:32.335 SO libspdk_bdev_passthru.so.6.0 00:03:32.335 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.595 LIB libspdk_bdev_split.a 00:03:32.595 SYMLINK libspdk_bdev_passthru.so 00:03:32.595 CC module/bdev/nvme/vbdev_opal.o 00:03:32.595 LIB libspdk_bdev_null.a 00:03:32.595 SO libspdk_bdev_split.so.6.0 00:03:32.595 SO libspdk_bdev_null.so.6.0 00:03:32.595 LIB libspdk_bdev_zone_block.a 00:03:32.595 SO libspdk_bdev_zone_block.so.6.0 00:03:32.595 SYMLINK libspdk_bdev_split.so 00:03:32.595 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.595 SYMLINK libspdk_bdev_null.so 00:03:32.595 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.595 SYMLINK libspdk_bdev_zone_block.so 00:03:32.595 CC module/bdev/raid/raid0.o 00:03:32.595 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.855 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.855 CC module/bdev/raid/raid1.o 00:03:32.855 CC module/bdev/ftl/bdev_ftl.o 00:03:32.855 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.855 LIB libspdk_bdev_aio.a 00:03:32.855 SO libspdk_bdev_aio.so.6.0 00:03:32.855 CC module/bdev/raid/concat.o 00:03:32.855 SYMLINK libspdk_bdev_aio.so 00:03:32.855 CC module/bdev/raid/raid5f.o 00:03:32.855 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:33.116 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.116 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.116 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.116 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.116 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.116 LIB libspdk_bdev_ftl.a 00:03:33.116 SO libspdk_bdev_ftl.so.6.0 00:03:33.116 LIB libspdk_bdev_iscsi.a 00:03:33.116 SYMLINK libspdk_bdev_ftl.so 00:03:33.376 SO libspdk_bdev_iscsi.so.6.0 00:03:33.376 SYMLINK libspdk_bdev_iscsi.so 00:03:33.376 LIB libspdk_bdev_raid.a 00:03:33.636 LIB libspdk_bdev_virtio.a 00:03:33.636 SO libspdk_bdev_raid.so.6.0 00:03:33.636 SO libspdk_bdev_virtio.so.6.0 00:03:33.636 SYMLINK libspdk_bdev_raid.so 00:03:33.636 SYMLINK libspdk_bdev_virtio.so 00:03:34.575 LIB libspdk_bdev_nvme.a 00:03:34.575 SO 
libspdk_bdev_nvme.so.7.0 00:03:34.575 SYMLINK libspdk_bdev_nvme.so 00:03:35.144 CC module/event/subsystems/sock/sock.o 00:03:35.144 CC module/event/subsystems/scheduler/scheduler.o 00:03:35.144 CC module/event/subsystems/vmd/vmd.o 00:03:35.144 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.144 CC module/event/subsystems/fsdev/fsdev.o 00:03:35.144 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:35.144 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.144 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:35.144 CC module/event/subsystems/keyring/keyring.o 00:03:35.403 LIB libspdk_event_sock.a 00:03:35.403 LIB libspdk_event_scheduler.a 00:03:35.403 LIB libspdk_event_keyring.a 00:03:35.403 LIB libspdk_event_vhost_blk.a 00:03:35.403 LIB libspdk_event_vmd.a 00:03:35.403 LIB libspdk_event_fsdev.a 00:03:35.403 SO libspdk_event_sock.so.5.0 00:03:35.403 SO libspdk_event_keyring.so.1.0 00:03:35.403 SO libspdk_event_scheduler.so.4.0 00:03:35.403 SO libspdk_event_vhost_blk.so.3.0 00:03:35.403 LIB libspdk_event_iobuf.a 00:03:35.403 SO libspdk_event_vmd.so.6.0 00:03:35.403 SO libspdk_event_fsdev.so.1.0 00:03:35.403 SO libspdk_event_iobuf.so.3.0 00:03:35.403 SYMLINK libspdk_event_sock.so 00:03:35.403 SYMLINK libspdk_event_keyring.so 00:03:35.403 SYMLINK libspdk_event_scheduler.so 00:03:35.403 SYMLINK libspdk_event_vhost_blk.so 00:03:35.403 SYMLINK libspdk_event_fsdev.so 00:03:35.403 SYMLINK libspdk_event_vmd.so 00:03:35.403 SYMLINK libspdk_event_iobuf.so 00:03:35.972 CC module/event/subsystems/accel/accel.o 00:03:35.972 LIB libspdk_event_accel.a 00:03:35.972 SO libspdk_event_accel.so.6.0 00:03:35.972 SYMLINK libspdk_event_accel.so 00:03:36.541 CC module/event/subsystems/bdev/bdev.o 00:03:36.541 LIB libspdk_event_bdev.a 00:03:36.801 SO libspdk_event_bdev.so.6.0 00:03:36.801 SYMLINK libspdk_event_bdev.so 00:03:37.060 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.060 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.060 CC module/event/subsystems/scsi/scsi.o 
00:03:37.060 CC module/event/subsystems/ublk/ublk.o 00:03:37.060 CC module/event/subsystems/nbd/nbd.o 00:03:37.319 LIB libspdk_event_scsi.a 00:03:37.319 LIB libspdk_event_nbd.a 00:03:37.319 SO libspdk_event_scsi.so.6.0 00:03:37.319 LIB libspdk_event_ublk.a 00:03:37.319 SO libspdk_event_nbd.so.6.0 00:03:37.319 SO libspdk_event_ublk.so.3.0 00:03:37.319 LIB libspdk_event_nvmf.a 00:03:37.319 SYMLINK libspdk_event_scsi.so 00:03:37.319 SYMLINK libspdk_event_nbd.so 00:03:37.319 SO libspdk_event_nvmf.so.6.0 00:03:37.319 SYMLINK libspdk_event_ublk.so 00:03:37.319 SYMLINK libspdk_event_nvmf.so 00:03:37.579 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.579 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.840 LIB libspdk_event_vhost_scsi.a 00:03:37.840 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.840 LIB libspdk_event_iscsi.a 00:03:37.840 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.840 SO libspdk_event_iscsi.so.6.0 00:03:38.100 SYMLINK libspdk_event_iscsi.so 00:03:38.100 SO libspdk.so.6.0 00:03:38.100 SYMLINK libspdk.so 00:03:38.359 CXX app/trace/trace.o 00:03:38.359 CC app/trace_record/trace_record.o 00:03:38.619 CC app/nvmf_tgt/nvmf_main.o 00:03:38.619 CC app/iscsi_tgt/iscsi_tgt.o 00:03:38.619 CC app/spdk_tgt/spdk_tgt.o 00:03:38.619 CC examples/ioat/perf/perf.o 00:03:38.619 CC test/thread/poller_perf/poller_perf.o 00:03:38.620 CC examples/util/zipf/zipf.o 00:03:38.620 CC test/app/bdev_svc/bdev_svc.o 00:03:38.620 CC test/dma/test_dma/test_dma.o 00:03:38.620 LINK nvmf_tgt 00:03:38.620 LINK poller_perf 00:03:38.620 LINK iscsi_tgt 00:03:38.620 LINK zipf 00:03:38.620 LINK spdk_tgt 00:03:38.620 LINK spdk_trace_record 00:03:38.880 LINK bdev_svc 00:03:38.880 LINK ioat_perf 00:03:38.880 LINK spdk_trace 00:03:38.880 TEST_HEADER include/spdk/accel.h 00:03:38.880 TEST_HEADER include/spdk/accel_module.h 00:03:38.880 TEST_HEADER include/spdk/assert.h 00:03:38.880 TEST_HEADER include/spdk/barrier.h 00:03:38.880 TEST_HEADER include/spdk/base64.h 00:03:38.880 TEST_HEADER 
include/spdk/bdev.h 00:03:38.880 TEST_HEADER include/spdk/bdev_module.h 00:03:38.880 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.880 TEST_HEADER include/spdk/bit_array.h 00:03:38.880 TEST_HEADER include/spdk/bit_pool.h 00:03:38.880 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.880 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.880 TEST_HEADER include/spdk/blobfs.h 00:03:38.880 TEST_HEADER include/spdk/blob.h 00:03:38.880 TEST_HEADER include/spdk/conf.h 00:03:38.880 TEST_HEADER include/spdk/config.h 00:03:38.880 CC app/spdk_lspci/spdk_lspci.o 00:03:38.880 TEST_HEADER include/spdk/cpuset.h 00:03:38.880 TEST_HEADER include/spdk/crc16.h 00:03:38.880 TEST_HEADER include/spdk/crc32.h 00:03:38.880 TEST_HEADER include/spdk/crc64.h 00:03:38.880 TEST_HEADER include/spdk/dif.h 00:03:38.880 TEST_HEADER include/spdk/dma.h 00:03:38.880 TEST_HEADER include/spdk/endian.h 00:03:38.880 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.880 TEST_HEADER include/spdk/env.h 00:03:38.880 TEST_HEADER include/spdk/event.h 00:03:38.880 TEST_HEADER include/spdk/fd_group.h 00:03:38.880 TEST_HEADER include/spdk/fd.h 00:03:38.880 TEST_HEADER include/spdk/file.h 00:03:38.880 TEST_HEADER include/spdk/fsdev.h 00:03:38.880 TEST_HEADER include/spdk/fsdev_module.h 00:03:38.880 TEST_HEADER include/spdk/ftl.h 00:03:38.880 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:38.880 CC app/spdk_nvme_perf/perf.o 00:03:38.880 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.880 TEST_HEADER include/spdk/hexlify.h 00:03:38.880 TEST_HEADER include/spdk/histogram_data.h 00:03:38.880 TEST_HEADER include/spdk/idxd.h 00:03:38.880 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.880 TEST_HEADER include/spdk/init.h 00:03:38.880 TEST_HEADER include/spdk/ioat.h 00:03:38.880 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.880 CC examples/ioat/verify/verify.o 00:03:38.880 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.880 TEST_HEADER include/spdk/json.h 00:03:38.880 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.880 TEST_HEADER 
include/spdk/keyring.h 00:03:38.880 TEST_HEADER include/spdk/keyring_module.h 00:03:38.880 TEST_HEADER include/spdk/likely.h 00:03:38.880 TEST_HEADER include/spdk/log.h 00:03:39.140 TEST_HEADER include/spdk/lvol.h 00:03:39.140 TEST_HEADER include/spdk/md5.h 00:03:39.140 TEST_HEADER include/spdk/memory.h 00:03:39.140 TEST_HEADER include/spdk/mmio.h 00:03:39.140 TEST_HEADER include/spdk/nbd.h 00:03:39.140 TEST_HEADER include/spdk/net.h 00:03:39.140 TEST_HEADER include/spdk/notify.h 00:03:39.140 TEST_HEADER include/spdk/nvme.h 00:03:39.140 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.140 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.140 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.140 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.140 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.140 CC test/app/histogram_perf/histogram_perf.o 00:03:39.140 CC test/app/jsoncat/jsoncat.o 00:03:39.140 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.140 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.140 TEST_HEADER include/spdk/nvmf.h 00:03:39.140 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.140 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.140 TEST_HEADER include/spdk/opal.h 00:03:39.140 TEST_HEADER include/spdk/opal_spec.h 00:03:39.140 TEST_HEADER include/spdk/pci_ids.h 00:03:39.140 LINK test_dma 00:03:39.140 TEST_HEADER include/spdk/pipe.h 00:03:39.140 TEST_HEADER include/spdk/queue.h 00:03:39.140 TEST_HEADER include/spdk/reduce.h 00:03:39.140 TEST_HEADER include/spdk/rpc.h 00:03:39.140 TEST_HEADER include/spdk/scheduler.h 00:03:39.140 TEST_HEADER include/spdk/scsi.h 00:03:39.140 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.140 TEST_HEADER include/spdk/sock.h 00:03:39.140 TEST_HEADER include/spdk/stdinc.h 00:03:39.140 TEST_HEADER include/spdk/string.h 00:03:39.140 TEST_HEADER include/spdk/thread.h 00:03:39.140 TEST_HEADER include/spdk/trace.h 00:03:39.140 TEST_HEADER include/spdk/trace_parser.h 00:03:39.140 TEST_HEADER include/spdk/tree.h 00:03:39.140 LINK spdk_lspci 
00:03:39.140 TEST_HEADER include/spdk/ublk.h 00:03:39.140 CC app/spdk_nvme_identify/identify.o 00:03:39.140 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.140 TEST_HEADER include/spdk/util.h 00:03:39.140 TEST_HEADER include/spdk/uuid.h 00:03:39.140 TEST_HEADER include/spdk/version.h 00:03:39.140 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.140 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.140 TEST_HEADER include/spdk/vhost.h 00:03:39.140 TEST_HEADER include/spdk/vmd.h 00:03:39.140 TEST_HEADER include/spdk/xor.h 00:03:39.140 TEST_HEADER include/spdk/zipf.h 00:03:39.140 CXX test/cpp_headers/accel.o 00:03:39.140 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.140 LINK jsoncat 00:03:39.140 LINK histogram_perf 00:03:39.140 LINK verify 00:03:39.140 CXX test/cpp_headers/accel_module.o 00:03:39.140 CXX test/cpp_headers/assert.o 00:03:39.400 CXX test/cpp_headers/barrier.o 00:03:39.400 CC test/env/vtophys/vtophys.o 00:03:39.400 CXX test/cpp_headers/base64.o 00:03:39.400 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.400 LINK vtophys 00:03:39.400 CC test/app/stub/stub.o 00:03:39.400 CXX test/cpp_headers/bdev.o 00:03:39.400 LINK nvme_fuzz 00:03:39.400 CC test/event/event_perf/event_perf.o 00:03:39.659 LINK interrupt_tgt 00:03:39.659 CC test/nvme/aer/aer.o 00:03:39.659 CC test/nvme/reset/reset.o 00:03:39.659 LINK stub 00:03:39.659 CXX test/cpp_headers/bdev_module.o 00:03:39.659 LINK mem_callbacks 00:03:39.659 LINK event_perf 00:03:39.659 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.917 CXX test/cpp_headers/bdev_zone.o 00:03:39.917 LINK spdk_nvme_perf 00:03:39.917 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.917 CC test/env/memory/memory_ut.o 00:03:39.917 LINK reset 00:03:39.917 LINK aer 00:03:39.917 CC test/event/reactor/reactor.o 00:03:39.917 CC examples/thread/thread/thread_ex.o 00:03:39.917 LINK env_dpdk_post_init 00:03:39.917 CXX test/cpp_headers/bit_array.o 00:03:39.917 LINK spdk_nvme_identify 00:03:39.917 CXX 
test/cpp_headers/bit_pool.o 00:03:39.917 LINK reactor 00:03:40.177 CC test/event/reactor_perf/reactor_perf.o 00:03:40.177 CC test/nvme/sgl/sgl.o 00:03:40.177 LINK thread 00:03:40.177 CXX test/cpp_headers/blob_bdev.o 00:03:40.177 CC test/nvme/e2edp/nvme_dp.o 00:03:40.177 CC test/nvme/overhead/overhead.o 00:03:40.177 LINK reactor_perf 00:03:40.177 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.177 CC test/nvme/err_injection/err_injection.o 00:03:40.436 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.436 LINK sgl 00:03:40.436 LINK spdk_nvme_discover 00:03:40.436 LINK err_injection 00:03:40.436 LINK nvme_dp 00:03:40.436 CC test/event/app_repeat/app_repeat.o 00:03:40.436 CXX test/cpp_headers/blobfs.o 00:03:40.436 LINK overhead 00:03:40.436 CC examples/sock/hello_world/hello_sock.o 00:03:40.696 LINK app_repeat 00:03:40.696 CXX test/cpp_headers/blob.o 00:03:40.696 CC test/rpc_client/rpc_client_test.o 00:03:40.696 CC test/event/scheduler/scheduler.o 00:03:40.696 CC app/spdk_top/spdk_top.o 00:03:40.696 CC test/nvme/startup/startup.o 00:03:40.696 LINK hello_sock 00:03:40.696 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.696 CXX test/cpp_headers/conf.o 00:03:40.696 CC test/nvme/reserve/reserve.o 00:03:40.955 LINK rpc_client_test 00:03:40.955 LINK memory_ut 00:03:40.955 LINK startup 00:03:40.955 LINK lsvmd 00:03:40.955 LINK scheduler 00:03:40.955 CXX test/cpp_headers/config.o 00:03:40.955 CXX test/cpp_headers/cpuset.o 00:03:40.955 CC test/nvme/simple_copy/simple_copy.o 00:03:40.955 LINK reserve 00:03:41.214 CXX test/cpp_headers/crc16.o 00:03:41.214 CC test/env/pci/pci_ut.o 00:03:41.214 CC examples/vmd/led/led.o 00:03:41.214 CC test/nvme/connect_stress/connect_stress.o 00:03:41.214 CC test/accel/dif/dif.o 00:03:41.214 CC test/blobfs/mkfs/mkfs.o 00:03:41.214 LINK simple_copy 00:03:41.214 CXX test/cpp_headers/crc32.o 00:03:41.214 LINK led 00:03:41.473 LINK connect_stress 00:03:41.473 LINK mkfs 00:03:41.473 CXX test/cpp_headers/crc64.o 00:03:41.473 CC test/lvol/esnap/esnap.o 00:03:41.473 
LINK iscsi_fuzz 00:03:41.473 CC test/nvme/boot_partition/boot_partition.o 00:03:41.473 LINK pci_ut 00:03:41.473 CXX test/cpp_headers/dif.o 00:03:41.473 CC examples/idxd/perf/perf.o 00:03:41.473 CXX test/cpp_headers/dma.o 00:03:41.473 LINK spdk_top 00:03:41.473 CXX test/cpp_headers/endian.o 00:03:41.733 LINK boot_partition 00:03:41.733 CXX test/cpp_headers/env_dpdk.o 00:03:41.733 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:41.733 CC test/nvme/compliance/nvme_compliance.o 00:03:41.733 CC app/spdk_dd/spdk_dd.o 00:03:41.733 CC app/vhost/vhost.o 00:03:41.733 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.733 CXX test/cpp_headers/env.o 00:03:41.733 CC app/fio/nvme/fio_plugin.o 00:03:41.992 LINK idxd_perf 00:03:41.992 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:41.992 LINK dif 00:03:41.992 CXX test/cpp_headers/event.o 00:03:41.992 LINK vhost 00:03:41.992 LINK fused_ordering 00:03:41.992 LINK nvme_compliance 00:03:42.252 CXX test/cpp_headers/fd_group.o 00:03:42.252 LINK spdk_dd 00:03:42.252 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:42.252 CC app/fio/bdev/fio_plugin.o 00:03:42.252 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.252 CXX test/cpp_headers/fd.o 00:03:42.252 LINK vhost_fuzz 00:03:42.252 CC test/nvme/fdp/fdp.o 00:03:42.511 CC test/bdev/bdevio/bdevio.o 00:03:42.511 CXX test/cpp_headers/file.o 00:03:42.511 LINK hello_fsdev 00:03:42.511 CC test/nvme/cuse/cuse.o 00:03:42.511 LINK spdk_nvme 00:03:42.511 LINK doorbell_aers 00:03:42.511 CXX test/cpp_headers/fsdev.o 00:03:42.511 CXX test/cpp_headers/fsdev_module.o 00:03:42.511 CXX test/cpp_headers/ftl.o 00:03:42.511 LINK fdp 00:03:42.770 CC examples/accel/perf/accel_perf.o 00:03:42.770 LINK spdk_bdev 00:03:42.770 CC examples/blob/hello_world/hello_blob.o 00:03:42.770 LINK bdevio 00:03:42.770 CC examples/nvme/hello_world/hello_world.o 00:03:42.770 CC examples/nvme/reconnect/reconnect.o 00:03:42.770 CXX test/cpp_headers/fuse_dispatcher.o 00:03:42.770 CC examples/nvme/nvme_manage/nvme_manage.o 
00:03:42.770 CC examples/nvme/arbitration/arbitration.o 00:03:43.029 CXX test/cpp_headers/gpt_spec.o 00:03:43.029 LINK hello_blob 00:03:43.029 LINK hello_world 00:03:43.029 CXX test/cpp_headers/hexlify.o 00:03:43.029 CC examples/blob/cli/blobcli.o 00:03:43.029 LINK reconnect 00:03:43.289 LINK arbitration 00:03:43.289 CXX test/cpp_headers/histogram_data.o 00:03:43.289 LINK accel_perf 00:03:43.289 CC examples/nvme/hotplug/hotplug.o 00:03:43.289 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:43.289 CXX test/cpp_headers/idxd.o 00:03:43.289 CC examples/nvme/abort/abort.o 00:03:43.289 CXX test/cpp_headers/idxd_spec.o 00:03:43.289 LINK nvme_manage 00:03:43.289 LINK cmb_copy 00:03:43.548 LINK hotplug 00:03:43.548 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:43.548 LINK blobcli 00:03:43.548 CXX test/cpp_headers/init.o 00:03:43.548 CXX test/cpp_headers/ioat.o 00:03:43.548 CXX test/cpp_headers/ioat_spec.o 00:03:43.548 LINK pmr_persistence 00:03:43.548 CC examples/bdev/hello_world/hello_bdev.o 00:03:43.548 CXX test/cpp_headers/iscsi_spec.o 00:03:43.548 LINK cuse 00:03:43.548 CXX test/cpp_headers/json.o 00:03:43.548 CC examples/bdev/bdevperf/bdevperf.o 00:03:43.807 LINK abort 00:03:43.807 CXX test/cpp_headers/jsonrpc.o 00:03:43.807 CXX test/cpp_headers/keyring.o 00:03:43.807 CXX test/cpp_headers/keyring_module.o 00:03:43.807 CXX test/cpp_headers/likely.o 00:03:43.807 CXX test/cpp_headers/log.o 00:03:43.807 CXX test/cpp_headers/lvol.o 00:03:43.807 LINK hello_bdev 00:03:43.807 CXX test/cpp_headers/md5.o 00:03:43.807 CXX test/cpp_headers/memory.o 00:03:43.807 CXX test/cpp_headers/mmio.o 00:03:43.807 CXX test/cpp_headers/nbd.o 00:03:43.807 CXX test/cpp_headers/net.o 00:03:43.807 CXX test/cpp_headers/notify.o 00:03:43.807 CXX test/cpp_headers/nvme.o 00:03:44.067 CXX test/cpp_headers/nvme_intel.o 00:03:44.067 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.067 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.067 CXX test/cpp_headers/nvme_spec.o 00:03:44.067 CXX 
test/cpp_headers/nvme_zns.o 00:03:44.067 CXX test/cpp_headers/nvmf_cmd.o 00:03:44.067 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.067 CXX test/cpp_headers/nvmf.o 00:03:44.067 CXX test/cpp_headers/nvmf_spec.o 00:03:44.067 CXX test/cpp_headers/nvmf_transport.o 00:03:44.067 CXX test/cpp_headers/opal.o 00:03:44.067 CXX test/cpp_headers/opal_spec.o 00:03:44.326 CXX test/cpp_headers/pci_ids.o 00:03:44.326 CXX test/cpp_headers/pipe.o 00:03:44.326 CXX test/cpp_headers/queue.o 00:03:44.326 CXX test/cpp_headers/reduce.o 00:03:44.326 CXX test/cpp_headers/rpc.o 00:03:44.326 CXX test/cpp_headers/scheduler.o 00:03:44.326 CXX test/cpp_headers/scsi.o 00:03:44.326 CXX test/cpp_headers/scsi_spec.o 00:03:44.326 CXX test/cpp_headers/sock.o 00:03:44.326 CXX test/cpp_headers/stdinc.o 00:03:44.326 CXX test/cpp_headers/string.o 00:03:44.326 CXX test/cpp_headers/thread.o 00:03:44.326 CXX test/cpp_headers/trace.o 00:03:44.326 CXX test/cpp_headers/trace_parser.o 00:03:44.585 LINK bdevperf 00:03:44.585 CXX test/cpp_headers/tree.o 00:03:44.585 CXX test/cpp_headers/ublk.o 00:03:44.585 CXX test/cpp_headers/util.o 00:03:44.585 CXX test/cpp_headers/uuid.o 00:03:44.585 CXX test/cpp_headers/version.o 00:03:44.585 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.585 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.585 CXX test/cpp_headers/vhost.o 00:03:44.585 CXX test/cpp_headers/vmd.o 00:03:44.585 CXX test/cpp_headers/xor.o 00:03:44.585 CXX test/cpp_headers/zipf.o 00:03:44.845 CC examples/nvmf/nvmf/nvmf.o 00:03:45.105 LINK nvmf 00:03:47.015 LINK esnap 00:03:47.015 00:03:47.015 real 1m18.703s 00:03:47.015 user 6m49.881s 00:03:47.015 sys 1m34.899s 00:03:47.015 ************************************ 00:03:47.015 END TEST make 00:03:47.015 ************************************ 00:03:47.015 16:05:01 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:47.015 16:05:01 make -- common/autotest_common.sh@10 -- $ set +x 00:03:47.015 16:05:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:47.015 
16:05:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:47.015 16:05:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:47.015 16:05:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.015 16:05:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:47.015 16:05:01 -- pm/common@44 -- $ pid=5463 00:03:47.015 16:05:01 -- pm/common@50 -- $ kill -TERM 5463 00:03:47.015 16:05:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.015 16:05:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:47.015 16:05:01 -- pm/common@44 -- $ pid=5465 00:03:47.015 16:05:01 -- pm/common@50 -- $ kill -TERM 5465 00:03:47.279 16:05:01 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:47.279 16:05:01 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:47.279 16:05:01 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:47.279 16:05:01 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:47.279 16:05:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.279 16:05:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.279 16:05:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.279 16:05:01 -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.279 16:05:01 -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.279 16:05:01 -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.279 16:05:01 -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.279 16:05:01 -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.279 16:05:01 -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.279 16:05:01 -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.279 16:05:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.279 16:05:01 -- scripts/common.sh@344 -- # case "$op" in 00:03:47.279 16:05:01 -- scripts/common.sh@345 -- # : 1 00:03:47.279 16:05:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.279 16:05:01 
-- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:47.279 16:05:01 -- scripts/common.sh@365 -- # decimal 1 00:03:47.279 16:05:01 -- scripts/common.sh@353 -- # local d=1 00:03:47.279 16:05:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.279 16:05:01 -- scripts/common.sh@355 -- # echo 1 00:03:47.279 16:05:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.279 16:05:01 -- scripts/common.sh@366 -- # decimal 2 00:03:47.279 16:05:01 -- scripts/common.sh@353 -- # local d=2 00:03:47.279 16:05:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.279 16:05:01 -- scripts/common.sh@355 -- # echo 2 00:03:47.279 16:05:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.279 16:05:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.279 16:05:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.279 16:05:01 -- scripts/common.sh@368 -- # return 0 00:03:47.279 16:05:01 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.279 16:05:01 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:47.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.279 --rc genhtml_branch_coverage=1 00:03:47.279 --rc genhtml_function_coverage=1 00:03:47.279 --rc genhtml_legend=1 00:03:47.279 --rc geninfo_all_blocks=1 00:03:47.279 --rc geninfo_unexecuted_blocks=1 00:03:47.279 00:03:47.279 ' 00:03:47.279 16:05:01 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:47.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.279 --rc genhtml_branch_coverage=1 00:03:47.279 --rc genhtml_function_coverage=1 00:03:47.279 --rc genhtml_legend=1 00:03:47.279 --rc geninfo_all_blocks=1 00:03:47.279 --rc geninfo_unexecuted_blocks=1 00:03:47.279 00:03:47.279 ' 00:03:47.279 16:05:01 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:47.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.279 --rc 
genhtml_branch_coverage=1 00:03:47.279 --rc genhtml_function_coverage=1 00:03:47.279 --rc genhtml_legend=1 00:03:47.279 --rc geninfo_all_blocks=1 00:03:47.279 --rc geninfo_unexecuted_blocks=1 00:03:47.279 00:03:47.279 ' 00:03:47.279 16:05:01 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:47.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.279 --rc genhtml_branch_coverage=1 00:03:47.279 --rc genhtml_function_coverage=1 00:03:47.279 --rc genhtml_legend=1 00:03:47.279 --rc geninfo_all_blocks=1 00:03:47.279 --rc geninfo_unexecuted_blocks=1 00:03:47.279 00:03:47.279 ' 00:03:47.279 16:05:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:47.279 16:05:01 -- nvmf/common.sh@7 -- # uname -s 00:03:47.279 16:05:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:47.279 16:05:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:47.279 16:05:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:47.279 16:05:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:47.279 16:05:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:47.279 16:05:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:47.279 16:05:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:47.279 16:05:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:47.279 16:05:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:47.279 16:05:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:47.279 16:05:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b0fa62cc-0be9-4e6c-a497-5229b0bef527 00:03:47.279 16:05:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=b0fa62cc-0be9-4e6c-a497-5229b0bef527 00:03:47.279 16:05:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:47.279 16:05:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:47.279 16:05:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:47.279 16:05:01 -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:47.279 16:05:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:47.279 16:05:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:47.279 16:05:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:47.279 16:05:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.279 16:05:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.279 16:05:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.279 16:05:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.279 16:05:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.279 16:05:01 -- paths/export.sh@5 -- # export PATH 00:03:47.279 16:05:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.279 16:05:01 -- nvmf/common.sh@51 -- # : 0 00:03:47.279 16:05:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:47.279 16:05:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:47.279 16:05:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:47.279 16:05:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:47.279 16:05:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:47.279 16:05:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:47.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:47.279 16:05:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:47.279 16:05:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:47.279 16:05:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:47.279 16:05:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:47.279 16:05:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:47.279 16:05:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:47.279 16:05:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:47.279 16:05:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:47.279 16:05:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:47.279 16:05:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:47.279 16:05:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:47.540 16:05:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:47.540 16:05:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:47.540 16:05:02 -- spdk/autotest.sh@48 -- # udevadm_pid=54378 00:03:47.540 16:05:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:47.540 16:05:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:47.540 16:05:02 -- pm/common@17 -- # local monitor 00:03:47.540 16:05:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.540 16:05:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.540 16:05:02 -- pm/common@21 -- # date +%s 00:03:47.540 16:05:02 -- pm/common@25 -- 
# sleep 1 00:03:47.540 16:05:02 -- pm/common@21 -- # date +%s 00:03:47.540 16:05:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727539502 00:03:47.540 16:05:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727539502 00:03:47.540 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727539502_collect-cpu-load.pm.log 00:03:47.540 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727539502_collect-vmstat.pm.log 00:03:48.478 16:05:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:48.478 16:05:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:48.478 16:05:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:48.478 16:05:03 -- common/autotest_common.sh@10 -- # set +x 00:03:48.478 16:05:03 -- spdk/autotest.sh@59 -- # create_test_list 00:03:48.478 16:05:03 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:48.478 16:05:03 -- common/autotest_common.sh@10 -- # set +x 00:03:48.478 16:05:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:48.478 16:05:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:48.478 16:05:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:48.478 16:05:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:48.478 16:05:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:48.478 16:05:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:48.478 16:05:03 -- common/autotest_common.sh@1455 -- # uname 00:03:48.478 16:05:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:48.478 16:05:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:48.478 16:05:03 -- 
common/autotest_common.sh@1475 -- # uname 00:03:48.478 16:05:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:48.478 16:05:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:48.478 16:05:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:48.737 lcov: LCOV version 1.15 00:03:48.737 16:05:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:03.657 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:03.657 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:18.550 16:05:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:18.550 16:05:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.550 16:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:18.550 16:05:31 -- spdk/autotest.sh@78 -- # rm -f 00:04:18.550 16:05:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.550 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:18.550 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:18.550 16:05:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:18.550 16:05:32 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:18.550 16:05:32 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:18.550 16:05:32 -- common/autotest_common.sh@1656 -- # 
local nvme bdf 00:04:18.550 16:05:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.550 16:05:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:18.550 16:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:18.550 16:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.550 16:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.550 16:05:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.550 16:05:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:04:18.550 16:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:04:18.550 16:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:18.550 16:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.550 16:05:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.550 16:05:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:04:18.550 16:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:04:18.550 16:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:18.550 16:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.550 16:05:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.550 16:05:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:18.550 16:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:18.550 16:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.550 16:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.550 16:05:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:18.550 16:05:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.550 16:05:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.550 16:05:32 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:18.550 16:05:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:18.550 16:05:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:18.550 No valid GPT data, bailing 00:04:18.550 16:05:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.550 16:05:32 -- scripts/common.sh@394 -- # pt= 00:04:18.550 16:05:32 -- scripts/common.sh@395 -- # return 1 00:04:18.550 16:05:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:18.550 1+0 records in 00:04:18.550 1+0 records out 00:04:18.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00698859 s, 150 MB/s 00:04:18.550 16:05:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.550 16:05:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.550 16:05:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:04:18.550 16:05:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:04:18.550 16:05:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:18.550 No valid GPT data, bailing 00:04:18.551 16:05:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:18.551 16:05:32 -- scripts/common.sh@394 -- # pt= 00:04:18.551 16:05:32 -- scripts/common.sh@395 -- # return 1 00:04:18.551 16:05:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:18.551 1+0 records in 00:04:18.551 1+0 records out 00:04:18.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00688944 s, 152 MB/s 00:04:18.551 16:05:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.551 16:05:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.551 16:05:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:04:18.551 16:05:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:04:18.551 16:05:32 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:18.551 No valid GPT data, bailing 00:04:18.551 16:05:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:18.551 16:05:32 -- scripts/common.sh@394 -- # pt= 00:04:18.551 16:05:32 -- scripts/common.sh@395 -- # return 1 00:04:18.551 16:05:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:18.551 1+0 records in 00:04:18.551 1+0 records out 00:04:18.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684904 s, 153 MB/s 00:04:18.551 16:05:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.551 16:05:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.551 16:05:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:18.551 16:05:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:18.551 16:05:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:18.551 No valid GPT data, bailing 00:04:18.551 16:05:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.551 16:05:32 -- scripts/common.sh@394 -- # pt= 00:04:18.551 16:05:32 -- scripts/common.sh@395 -- # return 1 00:04:18.551 16:05:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:18.551 1+0 records in 00:04:18.551 1+0 records out 00:04:18.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00738157 s, 142 MB/s 00:04:18.551 16:05:32 -- spdk/autotest.sh@105 -- # sync 00:04:18.551 16:05:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:18.551 16:05:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:18.551 16:05:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:20.525 16:05:35 -- spdk/autotest.sh@111 -- # uname -s 00:04:20.791 16:05:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:20.791 16:05:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:20.791 16:05:35 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:21.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.360 Hugepages 00:04:21.360 node hugesize free / total 00:04:21.619 node0 1048576kB 0 / 0 00:04:21.619 node0 2048kB 0 / 0 00:04:21.619 00:04:21.619 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.619 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:21.619 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:21.879 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:21.879 16:05:36 -- spdk/autotest.sh@117 -- # uname -s 00:04:21.879 16:05:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:21.879 16:05:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:21.879 16:05:36 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.818 16:05:37 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:23.758 16:05:38 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:23.758 16:05:38 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:23.758 16:05:38 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:23.758 16:05:38 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:23.758 16:05:38 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:23.758 16:05:38 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:23.758 16:05:38 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.758 16:05:38 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:23.759 16:05:38 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:24.018 16:05:38 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:24.018 16:05:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:24.018 16:05:38 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.279 Waiting for block devices as requested 00:04:24.539 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.539 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.539 16:05:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:24.539 16:05:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:24.539 16:05:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:24.539 16:05:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:24.539 16:05:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:24.539 16:05:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:24.539 16:05:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:24.539 16:05:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:24.539 16:05:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:24.539 16:05:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:24.539 16:05:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:24.539 16:05:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:24.539 16:05:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:24.799 16:05:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:24.800 16:05:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 
00:04:24.800 16:05:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:24.800 16:05:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:24.800 16:05:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:24.800 16:05:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:24.800 16:05:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:24.800 16:05:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:24.800 16:05:39 -- common/autotest_common.sh@1541 -- # continue 00:04:24.800 16:05:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:24.800 16:05:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:24.800 16:05:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:24.800 16:05:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:24.800 16:05:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:24.800 16:05:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:24.800 16:05:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:24.800 16:05:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:24.800 16:05:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:24.800 16:05:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:24.800 16:05:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:24.800 16:05:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:24.800 16:05:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:24.800 16:05:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:24.800 16:05:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:24.800 16:05:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:24.800 16:05:39 
-- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:24.800 16:05:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:24.800 16:05:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:24.800 16:05:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:24.800 16:05:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:24.800 16:05:39 -- common/autotest_common.sh@1541 -- # continue 00:04:24.800 16:05:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:24.800 16:05:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.800 16:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.800 16:05:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:24.800 16:05:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.800 16:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.800 16:05:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.738 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.738 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.738 16:05:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:25.738 16:05:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.738 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.738 16:05:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:25.738 16:05:40 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:25.738 16:05:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.738 16:05:40 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:25.738 16:05:40 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:25.738 16:05:40 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:25.738 16:05:40 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:25.738 16:05:40 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:25.738 16:05:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:25.738 16:05:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:25.738 16:05:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.998 16:05:40 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.998 16:05:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:25.998 16:05:40 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:25.998 16:05:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:25.998 16:05:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:25.998 16:05:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:25.998 16:05:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:25.998 16:05:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.998 16:05:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:25.998 16:05:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:25.998 16:05:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:25.998 16:05:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.998 16:05:40 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:25.998 16:05:40 -- common/autotest_common.sh@1570 -- # return 0 00:04:25.998 16:05:40 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:25.998 16:05:40 -- common/autotest_common.sh@1578 -- # return 0 00:04:25.998 16:05:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:25.998 16:05:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:25.998 16:05:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.998 16:05:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.998 16:05:40 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:04:25.998 16:05:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.998 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.998 16:05:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:25.998 16:05:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:25.998 16:05:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.998 16:05:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.998 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.998 ************************************ 00:04:25.998 START TEST env 00:04:25.998 ************************************ 00:04:25.998 16:05:40 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:25.998 * Looking for test storage... 00:04:25.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:25.998 16:05:40 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:25.998 16:05:40 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:25.998 16:05:40 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.257 16:05:40 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.257 16:05:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.257 16:05:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.257 16:05:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.257 16:05:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.257 16:05:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.257 16:05:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.257 16:05:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.257 16:05:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.257 16:05:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.257 16:05:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.257 16:05:40 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:04:26.257 16:05:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:26.257 16:05:40 env -- scripts/common.sh@345 -- # : 1 00:04:26.257 16:05:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.257 16:05:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.257 16:05:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:26.257 16:05:40 env -- scripts/common.sh@353 -- # local d=1 00:04:26.257 16:05:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.257 16:05:40 env -- scripts/common.sh@355 -- # echo 1 00:04:26.257 16:05:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.257 16:05:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:26.257 16:05:40 env -- scripts/common.sh@353 -- # local d=2 00:04:26.257 16:05:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.257 16:05:40 env -- scripts/common.sh@355 -- # echo 2 00:04:26.257 16:05:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.257 16:05:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.257 16:05:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.257 16:05:40 env -- scripts/common.sh@368 -- # return 0 00:04:26.257 16:05:40 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.257 16:05:40 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.257 --rc genhtml_branch_coverage=1 00:04:26.257 --rc genhtml_function_coverage=1 00:04:26.257 --rc genhtml_legend=1 00:04:26.257 --rc geninfo_all_blocks=1 00:04:26.257 --rc geninfo_unexecuted_blocks=1 00:04:26.257 00:04:26.257 ' 00:04:26.257 16:05:40 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.257 --rc genhtml_branch_coverage=1 00:04:26.257 --rc genhtml_function_coverage=1 
00:04:26.257 --rc genhtml_legend=1 00:04:26.257 --rc geninfo_all_blocks=1 00:04:26.257 --rc geninfo_unexecuted_blocks=1 00:04:26.257 00:04:26.257 ' 00:04:26.257 16:05:40 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.257 --rc genhtml_branch_coverage=1 00:04:26.257 --rc genhtml_function_coverage=1 00:04:26.257 --rc genhtml_legend=1 00:04:26.257 --rc geninfo_all_blocks=1 00:04:26.257 --rc geninfo_unexecuted_blocks=1 00:04:26.257 00:04:26.257 ' 00:04:26.258 16:05:40 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.258 --rc genhtml_branch_coverage=1 00:04:26.258 --rc genhtml_function_coverage=1 00:04:26.258 --rc genhtml_legend=1 00:04:26.258 --rc geninfo_all_blocks=1 00:04:26.258 --rc geninfo_unexecuted_blocks=1 00:04:26.258 00:04:26.258 ' 00:04:26.258 16:05:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:26.258 16:05:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.258 16:05:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.258 16:05:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.258 ************************************ 00:04:26.258 START TEST env_memory 00:04:26.258 ************************************ 00:04:26.258 16:05:40 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:26.258 00:04:26.258 00:04:26.258 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.258 http://cunit.sourceforge.net/ 00:04:26.258 00:04:26.258 00:04:26.258 Suite: memory 00:04:26.258 Test: alloc and free memory map ...[2024-09-28 16:05:40.842814] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:26.258 passed 00:04:26.258 Test: mem map translation 
...[2024-09-28 16:05:40.883973] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:26.258 [2024-09-28 16:05:40.884011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:26.258 [2024-09-28 16:05:40.884086] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:26.258 [2024-09-28 16:05:40.884104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:26.258 passed 00:04:26.518 Test: mem map registration ...[2024-09-28 16:05:40.945300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:26.518 [2024-09-28 16:05:40.945341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:26.518 passed 00:04:26.518 Test: mem map adjacent registrations ...passed 00:04:26.518 00:04:26.518 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.518 suites 1 1 n/a 0 0 00:04:26.518 tests 4 4 4 0 0 00:04:26.518 asserts 152 152 152 0 n/a 00:04:26.518 00:04:26.518 Elapsed time = 0.222 seconds 00:04:26.518 00:04:26.518 real 0m0.272s 00:04:26.518 user 0m0.241s 00:04:26.518 sys 0m0.021s 00:04:26.518 16:05:41 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.518 16:05:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:26.518 ************************************ 00:04:26.518 END TEST env_memory 00:04:26.518 ************************************ 00:04:26.518 16:05:41 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.518 16:05:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.518 16:05:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.518 16:05:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.518 ************************************ 00:04:26.518 START TEST env_vtophys 00:04:26.518 ************************************ 00:04:26.518 16:05:41 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.518 EAL: lib.eal log level changed from notice to debug 00:04:26.518 EAL: Detected lcore 0 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 1 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 2 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 3 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 4 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 5 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 6 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 7 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 8 as core 0 on socket 0 00:04:26.518 EAL: Detected lcore 9 as core 0 on socket 0 00:04:26.518 EAL: Maximum logical cores by configuration: 128 00:04:26.518 EAL: Detected CPU lcores: 10 00:04:26.518 EAL: Detected NUMA nodes: 1 00:04:26.518 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:26.518 EAL: Detected shared linkage of DPDK 00:04:26.518 EAL: No shared files mode enabled, IPC will be disabled 00:04:26.778 EAL: Selected IOVA mode 'PA' 00:04:26.778 EAL: Probing VFIO support... 00:04:26.778 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.778 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:26.778 EAL: Ask a virtual area of 0x2e000 bytes 00:04:26.778 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:26.778 EAL: Setting up physically contiguous memory... 
00:04:26.778 EAL: Setting maximum number of open files to 524288 00:04:26.778 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:26.778 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:26.778 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.778 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:26.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.778 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.778 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:26.778 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:26.778 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.778 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:26.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.778 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.778 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:26.778 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:26.778 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.778 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:26.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.778 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.778 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:26.778 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:26.778 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.778 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:26.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.778 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.778 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:26.778 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:26.778 EAL: Hugepages will be freed exactly as allocated. 
00:04:26.778 EAL: No shared files mode enabled, IPC is disabled 00:04:26.778 EAL: No shared files mode enabled, IPC is disabled 00:04:26.778 EAL: TSC frequency is ~2290000 KHz 00:04:26.778 EAL: Main lcore 0 is ready (tid=7f9a4564fa40;cpuset=[0]) 00:04:26.778 EAL: Trying to obtain current memory policy. 00:04:26.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.778 EAL: Restoring previous memory policy: 0 00:04:26.778 EAL: request: mp_malloc_sync 00:04:26.778 EAL: No shared files mode enabled, IPC is disabled 00:04:26.778 EAL: Heap on socket 0 was expanded by 2MB 00:04:26.778 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.778 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.778 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.778 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:26.778 00:04:26.778 00:04:26.778 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.778 http://cunit.sourceforge.net/ 00:04:26.778 00:04:26.778 00:04:26.778 Suite: components_suite 00:04:27.038 Test: vtophys_malloc_test ...passed 00:04:27.038 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:27.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.038 EAL: Restoring previous memory policy: 4 00:04:27.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.038 EAL: request: mp_malloc_sync 00:04:27.038 EAL: No shared files mode enabled, IPC is disabled 00:04:27.038 EAL: Heap on socket 0 was expanded by 4MB 00:04:27.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.038 EAL: request: mp_malloc_sync 00:04:27.038 EAL: No shared files mode enabled, IPC is disabled 00:04:27.038 EAL: Heap on socket 0 was shrunk by 4MB 00:04:27.038 EAL: Trying to obtain current memory policy. 
00:04:27.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.038 EAL: Restoring previous memory policy: 4 00:04:27.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.038 EAL: request: mp_malloc_sync 00:04:27.038 EAL: No shared files mode enabled, IPC is disabled 00:04:27.038 EAL: Heap on socket 0 was expanded by 6MB 00:04:27.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.038 EAL: request: mp_malloc_sync 00:04:27.038 EAL: No shared files mode enabled, IPC is disabled 00:04:27.038 EAL: Heap on socket 0 was shrunk by 6MB 00:04:27.038 EAL: Trying to obtain current memory policy. 00:04:27.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.038 EAL: Restoring previous memory policy: 4 00:04:27.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.038 EAL: request: mp_malloc_sync 00:04:27.038 EAL: No shared files mode enabled, IPC is disabled 00:04:27.038 EAL: Heap on socket 0 was expanded by 10MB 00:04:27.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.038 EAL: request: mp_malloc_sync 00:04:27.038 EAL: No shared files mode enabled, IPC is disabled 00:04:27.038 EAL: Heap on socket 0 was shrunk by 10MB 00:04:27.038 EAL: Trying to obtain current memory policy. 00:04:27.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.038 EAL: Restoring previous memory policy: 4 00:04:27.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.038 EAL: request: mp_malloc_sync 00:04:27.038 EAL: No shared files mode enabled, IPC is disabled 00:04:27.038 EAL: Heap on socket 0 was expanded by 18MB 00:04:27.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.298 EAL: request: mp_malloc_sync 00:04:27.298 EAL: No shared files mode enabled, IPC is disabled 00:04:27.298 EAL: Heap on socket 0 was shrunk by 18MB 00:04:27.298 EAL: Trying to obtain current memory policy. 
00:04:27.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.298 EAL: Restoring previous memory policy: 4 00:04:27.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.298 EAL: request: mp_malloc_sync 00:04:27.298 EAL: No shared files mode enabled, IPC is disabled 00:04:27.298 EAL: Heap on socket 0 was expanded by 34MB 00:04:27.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.298 EAL: request: mp_malloc_sync 00:04:27.298 EAL: No shared files mode enabled, IPC is disabled 00:04:27.298 EAL: Heap on socket 0 was shrunk by 34MB 00:04:27.298 EAL: Trying to obtain current memory policy. 00:04:27.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.298 EAL: Restoring previous memory policy: 4 00:04:27.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.298 EAL: request: mp_malloc_sync 00:04:27.298 EAL: No shared files mode enabled, IPC is disabled 00:04:27.298 EAL: Heap on socket 0 was expanded by 66MB 00:04:27.558 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.558 EAL: request: mp_malloc_sync 00:04:27.558 EAL: No shared files mode enabled, IPC is disabled 00:04:27.558 EAL: Heap on socket 0 was shrunk by 66MB 00:04:27.558 EAL: Trying to obtain current memory policy. 00:04:27.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.558 EAL: Restoring previous memory policy: 4 00:04:27.558 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.558 EAL: request: mp_malloc_sync 00:04:27.558 EAL: No shared files mode enabled, IPC is disabled 00:04:27.558 EAL: Heap on socket 0 was expanded by 130MB 00:04:27.817 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.817 EAL: request: mp_malloc_sync 00:04:27.817 EAL: No shared files mode enabled, IPC is disabled 00:04:27.817 EAL: Heap on socket 0 was shrunk by 130MB 00:04:28.075 EAL: Trying to obtain current memory policy. 
00:04:28.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.075 EAL: Restoring previous memory policy: 4 00:04:28.075 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.075 EAL: request: mp_malloc_sync 00:04:28.075 EAL: No shared files mode enabled, IPC is disabled 00:04:28.075 EAL: Heap on socket 0 was expanded by 258MB 00:04:28.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.643 EAL: request: mp_malloc_sync 00:04:28.643 EAL: No shared files mode enabled, IPC is disabled 00:04:28.643 EAL: Heap on socket 0 was shrunk by 258MB 00:04:28.903 EAL: Trying to obtain current memory policy. 00:04:28.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.903 EAL: Restoring previous memory policy: 4 00:04:28.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.903 EAL: request: mp_malloc_sync 00:04:28.903 EAL: No shared files mode enabled, IPC is disabled 00:04:28.903 EAL: Heap on socket 0 was expanded by 514MB 00:04:29.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.840 EAL: request: mp_malloc_sync 00:04:29.840 EAL: No shared files mode enabled, IPC is disabled 00:04:29.840 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.776 EAL: Trying to obtain current memory policy. 
00:04:30.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.035 EAL: Restoring previous memory policy: 4 00:04:31.035 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.035 EAL: request: mp_malloc_sync 00:04:31.035 EAL: No shared files mode enabled, IPC is disabled 00:04:31.035 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.940 EAL: request: mp_malloc_sync 00:04:32.940 EAL: No shared files mode enabled, IPC is disabled 00:04:32.940 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:34.318 passed 00:04:34.318 00:04:34.318 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.318 suites 1 1 n/a 0 0 00:04:34.318 tests 2 2 2 0 0 00:04:34.318 asserts 5831 5831 5831 0 n/a 00:04:34.318 00:04:34.318 Elapsed time = 7.537 seconds 00:04:34.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.318 EAL: request: mp_malloc_sync 00:04:34.318 EAL: No shared files mode enabled, IPC is disabled 00:04:34.318 EAL: Heap on socket 0 was shrunk by 2MB 00:04:34.318 EAL: No shared files mode enabled, IPC is disabled 00:04:34.318 EAL: No shared files mode enabled, IPC is disabled 00:04:34.318 EAL: No shared files mode enabled, IPC is disabled 00:04:34.318 00:04:34.318 real 0m7.859s 00:04:34.318 user 0m6.912s 00:04:34.318 sys 0m0.792s 00:04:34.318 16:05:48 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.318 16:05:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:34.318 ************************************ 00:04:34.318 END TEST env_vtophys 00:04:34.318 ************************************ 00:04:34.578 16:05:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:34.578 16:05:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.578 16:05:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.578 16:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.578 
************************************ 00:04:34.578 START TEST env_pci 00:04:34.578 ************************************ 00:04:34.578 16:05:49 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:34.578 00:04:34.578 00:04:34.578 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.578 http://cunit.sourceforge.net/ 00:04:34.578 00:04:34.578 00:04:34.578 Suite: pci 00:04:34.578 Test: pci_hook ...[2024-09-28 16:05:49.088103] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56666 has claimed it 00:04:34.578 passed 00:04:34.578 00:04:34.578 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.578 suites 1 1 n/a 0 0 00:04:34.578 tests 1 1 1 0 0 00:04:34.578 asserts 25 25 25 0 n/a 00:04:34.578 00:04:34.578 Elapsed time = 0.006 seconds 00:04:34.578 EAL: Cannot find device (10000:00:01.0) 00:04:34.578 EAL: Failed to attach device on primary process 00:04:34.578 00:04:34.578 real 0m0.106s 00:04:34.578 user 0m0.042s 00:04:34.578 sys 0m0.062s 00:04:34.578 16:05:49 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.578 16:05:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:34.578 ************************************ 00:04:34.578 END TEST env_pci 00:04:34.578 ************************************ 00:04:34.578 16:05:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:34.578 16:05:49 env -- env/env.sh@15 -- # uname 00:04:34.578 16:05:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:34.578 16:05:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:34.578 16:05:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.578 16:05:49 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:34.578 16:05:49 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.578 16:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.578 ************************************ 00:04:34.578 START TEST env_dpdk_post_init 00:04:34.578 ************************************ 00:04:34.578 16:05:49 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.838 EAL: Detected CPU lcores: 10 00:04:34.838 EAL: Detected NUMA nodes: 1 00:04:34.838 EAL: Detected shared linkage of DPDK 00:04:34.838 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.838 EAL: Selected IOVA mode 'PA' 00:04:34.838 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.838 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:34.838 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:34.838 Starting DPDK initialization... 00:04:34.838 Starting SPDK post initialization... 00:04:34.838 SPDK NVMe probe 00:04:34.838 Attaching to 0000:00:10.0 00:04:34.838 Attaching to 0000:00:11.0 00:04:34.838 Attached to 0000:00:10.0 00:04:34.838 Attached to 0000:00:11.0 00:04:34.838 Cleaning up... 
00:04:34.838 00:04:34.838 real 0m0.269s 00:04:34.838 user 0m0.081s 00:04:34.838 sys 0m0.089s 00:04:34.838 16:05:49 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.838 16:05:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.838 ************************************ 00:04:34.838 END TEST env_dpdk_post_init 00:04:34.838 ************************************ 00:04:35.097 16:05:49 env -- env/env.sh@26 -- # uname 00:04:35.097 16:05:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:35.097 16:05:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.097 16:05:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.097 16:05:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.097 16:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.097 ************************************ 00:04:35.097 START TEST env_mem_callbacks 00:04:35.097 ************************************ 00:04:35.097 16:05:49 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.097 EAL: Detected CPU lcores: 10 00:04:35.097 EAL: Detected NUMA nodes: 1 00:04:35.097 EAL: Detected shared linkage of DPDK 00:04:35.097 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.097 EAL: Selected IOVA mode 'PA' 00:04:35.097 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.097 00:04:35.097 00:04:35.097 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.097 http://cunit.sourceforge.net/ 00:04:35.097 00:04:35.097 00:04:35.097 Suite: memory 00:04:35.097 Test: test ... 
00:04:35.097 register 0x200000200000 2097152 00:04:35.097 malloc 3145728 00:04:35.097 register 0x200000400000 4194304 00:04:35.097 buf 0x2000004fffc0 len 3145728 PASSED 00:04:35.097 malloc 64 00:04:35.097 buf 0x2000004ffec0 len 64 PASSED 00:04:35.097 malloc 4194304 00:04:35.097 register 0x200000800000 6291456 00:04:35.097 buf 0x2000009fffc0 len 4194304 PASSED 00:04:35.097 free 0x2000004fffc0 3145728 00:04:35.097 free 0x2000004ffec0 64 00:04:35.097 unregister 0x200000400000 4194304 PASSED 00:04:35.097 free 0x2000009fffc0 4194304 00:04:35.098 unregister 0x200000800000 6291456 PASSED 00:04:35.357 malloc 8388608 00:04:35.357 register 0x200000400000 10485760 00:04:35.357 buf 0x2000005fffc0 len 8388608 PASSED 00:04:35.357 free 0x2000005fffc0 8388608 00:04:35.357 unregister 0x200000400000 10485760 PASSED 00:04:35.357 passed 00:04:35.357 00:04:35.357 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.357 suites 1 1 n/a 0 0 00:04:35.357 tests 1 1 1 0 0 00:04:35.357 asserts 15 15 15 0 n/a 00:04:35.357 00:04:35.357 Elapsed time = 0.080 seconds 00:04:35.357 00:04:35.357 real 0m0.274s 00:04:35.357 user 0m0.109s 00:04:35.357 sys 0m0.063s 00:04:35.357 16:05:49 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.357 16:05:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:35.357 ************************************ 00:04:35.357 END TEST env_mem_callbacks 00:04:35.357 ************************************ 00:04:35.357 00:04:35.357 real 0m9.348s 00:04:35.357 user 0m7.613s 00:04:35.357 sys 0m1.385s 00:04:35.357 16:05:49 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.357 16:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.357 ************************************ 00:04:35.357 END TEST env 00:04:35.357 ************************************ 00:04:35.357 16:05:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:35.357 16:05:49 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.357 16:05:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.357 16:05:49 -- common/autotest_common.sh@10 -- # set +x 00:04:35.357 ************************************ 00:04:35.357 START TEST rpc 00:04:35.357 ************************************ 00:04:35.357 16:05:49 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:35.617 * Looking for test storage... 00:04:35.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.617 16:05:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.617 16:05:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.617 16:05:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.617 16:05:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.617 16:05:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.617 16:05:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.617 16:05:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.617 16:05:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.617 16:05:50 rpc -- scripts/common.sh@345 -- # : 1 00:04:35.617 16:05:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.617 16:05:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.617 16:05:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.617 16:05:50 rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.617 16:05:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.617 16:05:50 rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.617 16:05:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.617 16:05:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.617 16:05:50 rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.617 16:05:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.617 16:05:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.617 16:05:50 rpc -- scripts/common.sh@368 -- # return 0 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 00:04:35.617 ' 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 00:04:35.617 ' 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 00:04:35.617 ' 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 00:04:35.617 ' 00:04:35.617 16:05:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56793 00:04:35.617 16:05:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:35.617 16:05:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.617 16:05:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56793 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@831 -- # '[' -z 56793 ']' 00:04:35.617 16:05:50 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.618 16:05:50 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.618 16:05:50 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.618 16:05:50 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.618 16:05:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.618 [2024-09-28 16:05:50.276159] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:35.618 [2024-09-28 16:05:50.276302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56793 ] 00:04:35.877 [2024-09-28 16:05:50.441675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.136 [2024-09-28 16:05:50.650599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.136 [2024-09-28 16:05:50.650654] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56793' to capture a snapshot of events at runtime. 00:04:36.136 [2024-09-28 16:05:50.650686] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.136 [2024-09-28 16:05:50.650696] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.136 [2024-09-28 16:05:50.650704] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56793 for offline analysis/debug. 
00:04:36.136 [2024-09-28 16:05:50.650745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.074 16:05:51 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.074 16:05:51 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:37.074 16:05:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.074 16:05:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.074 16:05:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.074 16:05:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.074 16:05:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.074 16:05:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.074 16:05:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.074 ************************************ 00:04:37.074 START TEST rpc_integrity 00:04:37.074 ************************************ 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.074 16:05:51 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.074 { 00:04:37.074 "name": "Malloc0", 00:04:37.074 "aliases": [ 00:04:37.074 "9a6b1230-ae74-4d89-b17f-d46ae1daf66e" 00:04:37.074 ], 00:04:37.074 "product_name": "Malloc disk", 00:04:37.074 "block_size": 512, 00:04:37.074 "num_blocks": 16384, 00:04:37.074 "uuid": "9a6b1230-ae74-4d89-b17f-d46ae1daf66e", 00:04:37.074 "assigned_rate_limits": { 00:04:37.074 "rw_ios_per_sec": 0, 00:04:37.074 "rw_mbytes_per_sec": 0, 00:04:37.074 "r_mbytes_per_sec": 0, 00:04:37.074 "w_mbytes_per_sec": 0 00:04:37.074 }, 00:04:37.074 "claimed": false, 00:04:37.074 "zoned": false, 00:04:37.074 "supported_io_types": { 00:04:37.074 "read": true, 00:04:37.074 "write": true, 00:04:37.074 "unmap": true, 00:04:37.074 "flush": true, 00:04:37.074 "reset": true, 00:04:37.074 "nvme_admin": false, 00:04:37.074 "nvme_io": false, 00:04:37.074 "nvme_io_md": false, 00:04:37.074 "write_zeroes": true, 00:04:37.074 "zcopy": true, 00:04:37.074 "get_zone_info": false, 00:04:37.074 "zone_management": false, 00:04:37.074 "zone_append": false, 00:04:37.074 "compare": false, 00:04:37.074 "compare_and_write": false, 00:04:37.074 "abort": true, 00:04:37.074 "seek_hole": false, 
00:04:37.074 "seek_data": false, 00:04:37.074 "copy": true, 00:04:37.074 "nvme_iov_md": false 00:04:37.074 }, 00:04:37.074 "memory_domains": [ 00:04:37.074 { 00:04:37.074 "dma_device_id": "system", 00:04:37.074 "dma_device_type": 1 00:04:37.074 }, 00:04:37.074 { 00:04:37.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.074 "dma_device_type": 2 00:04:37.074 } 00:04:37.074 ], 00:04:37.074 "driver_specific": {} 00:04:37.074 } 00:04:37.074 ]' 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.074 [2024-09-28 16:05:51.626967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.074 [2024-09-28 16:05:51.627025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.074 [2024-09-28 16:05:51.627048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:37.074 [2024-09-28 16:05:51.627060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.074 [2024-09-28 16:05:51.629243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.074 [2024-09-28 16:05:51.629281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.074 Passthru0 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.074 { 00:04:37.074 "name": "Malloc0", 00:04:37.074 "aliases": [ 00:04:37.074 "9a6b1230-ae74-4d89-b17f-d46ae1daf66e" 00:04:37.074 ], 00:04:37.074 "product_name": "Malloc disk", 00:04:37.074 "block_size": 512, 00:04:37.074 "num_blocks": 16384, 00:04:37.074 "uuid": "9a6b1230-ae74-4d89-b17f-d46ae1daf66e", 00:04:37.074 "assigned_rate_limits": { 00:04:37.074 "rw_ios_per_sec": 0, 00:04:37.074 "rw_mbytes_per_sec": 0, 00:04:37.074 "r_mbytes_per_sec": 0, 00:04:37.074 "w_mbytes_per_sec": 0 00:04:37.074 }, 00:04:37.074 "claimed": true, 00:04:37.074 "claim_type": "exclusive_write", 00:04:37.074 "zoned": false, 00:04:37.074 "supported_io_types": { 00:04:37.074 "read": true, 00:04:37.074 "write": true, 00:04:37.074 "unmap": true, 00:04:37.074 "flush": true, 00:04:37.074 "reset": true, 00:04:37.074 "nvme_admin": false, 00:04:37.074 "nvme_io": false, 00:04:37.074 "nvme_io_md": false, 00:04:37.074 "write_zeroes": true, 00:04:37.074 "zcopy": true, 00:04:37.074 "get_zone_info": false, 00:04:37.074 "zone_management": false, 00:04:37.074 "zone_append": false, 00:04:37.074 "compare": false, 00:04:37.074 "compare_and_write": false, 00:04:37.074 "abort": true, 00:04:37.074 "seek_hole": false, 00:04:37.074 "seek_data": false, 00:04:37.074 "copy": true, 00:04:37.074 "nvme_iov_md": false 00:04:37.074 }, 00:04:37.074 "memory_domains": [ 00:04:37.074 { 00:04:37.074 "dma_device_id": "system", 00:04:37.074 "dma_device_type": 1 00:04:37.074 }, 00:04:37.074 { 00:04:37.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.074 "dma_device_type": 2 00:04:37.074 } 00:04:37.074 ], 00:04:37.074 "driver_specific": {} 00:04:37.074 }, 00:04:37.074 { 00:04:37.074 "name": "Passthru0", 00:04:37.074 "aliases": [ 00:04:37.074 "acb8557d-3fb9-5552-b8f1-ccd5b053c5c9" 00:04:37.074 ], 00:04:37.074 "product_name": "passthru", 00:04:37.074 
"block_size": 512, 00:04:37.074 "num_blocks": 16384, 00:04:37.074 "uuid": "acb8557d-3fb9-5552-b8f1-ccd5b053c5c9", 00:04:37.074 "assigned_rate_limits": { 00:04:37.074 "rw_ios_per_sec": 0, 00:04:37.074 "rw_mbytes_per_sec": 0, 00:04:37.074 "r_mbytes_per_sec": 0, 00:04:37.074 "w_mbytes_per_sec": 0 00:04:37.074 }, 00:04:37.074 "claimed": false, 00:04:37.074 "zoned": false, 00:04:37.074 "supported_io_types": { 00:04:37.074 "read": true, 00:04:37.074 "write": true, 00:04:37.074 "unmap": true, 00:04:37.074 "flush": true, 00:04:37.074 "reset": true, 00:04:37.074 "nvme_admin": false, 00:04:37.074 "nvme_io": false, 00:04:37.074 "nvme_io_md": false, 00:04:37.074 "write_zeroes": true, 00:04:37.074 "zcopy": true, 00:04:37.074 "get_zone_info": false, 00:04:37.074 "zone_management": false, 00:04:37.074 "zone_append": false, 00:04:37.074 "compare": false, 00:04:37.074 "compare_and_write": false, 00:04:37.074 "abort": true, 00:04:37.074 "seek_hole": false, 00:04:37.074 "seek_data": false, 00:04:37.074 "copy": true, 00:04:37.074 "nvme_iov_md": false 00:04:37.074 }, 00:04:37.074 "memory_domains": [ 00:04:37.074 { 00:04:37.074 "dma_device_id": "system", 00:04:37.074 "dma_device_type": 1 00:04:37.074 }, 00:04:37.074 { 00:04:37.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.074 "dma_device_type": 2 00:04:37.074 } 00:04:37.074 ], 00:04:37.074 "driver_specific": { 00:04:37.074 "passthru": { 00:04:37.074 "name": "Passthru0", 00:04:37.074 "base_bdev_name": "Malloc0" 00:04:37.074 } 00:04:37.074 } 00:04:37.074 } 00:04:37.074 ]' 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.074 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.074 16:05:51 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.074 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.075 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.075 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.335 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.335 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.335 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.335 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.335 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.335 ************************************ 00:04:37.335 END TEST rpc_integrity 00:04:37.335 ************************************ 00:04:37.335 16:05:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.335 00:04:37.335 real 0m0.355s 00:04:37.335 user 0m0.194s 00:04:37.335 sys 0m0.055s 00:04:37.335 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.335 16:05:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 16:05:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.335 16:05:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.335 16:05:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.335 16:05:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 ************************************ 00:04:37.335 START TEST rpc_plugins 00:04:37.335 ************************************ 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.335 { 00:04:37.335 "name": "Malloc1", 00:04:37.335 "aliases": [ 00:04:37.335 "4fff14c3-e89e-43c5-8f64-4e766dbf704a" 00:04:37.335 ], 00:04:37.335 "product_name": "Malloc disk", 00:04:37.335 "block_size": 4096, 00:04:37.335 "num_blocks": 256, 00:04:37.335 "uuid": "4fff14c3-e89e-43c5-8f64-4e766dbf704a", 00:04:37.335 "assigned_rate_limits": { 00:04:37.335 "rw_ios_per_sec": 0, 00:04:37.335 "rw_mbytes_per_sec": 0, 00:04:37.335 "r_mbytes_per_sec": 0, 00:04:37.335 "w_mbytes_per_sec": 0 00:04:37.335 }, 00:04:37.335 "claimed": false, 00:04:37.335 "zoned": false, 00:04:37.335 "supported_io_types": { 00:04:37.335 "read": true, 00:04:37.335 "write": true, 00:04:37.335 "unmap": true, 00:04:37.335 "flush": true, 00:04:37.335 "reset": true, 00:04:37.335 "nvme_admin": false, 00:04:37.335 "nvme_io": false, 00:04:37.335 "nvme_io_md": false, 00:04:37.335 "write_zeroes": true, 00:04:37.335 "zcopy": true, 00:04:37.335 "get_zone_info": false, 00:04:37.335 "zone_management": false, 00:04:37.335 "zone_append": false, 00:04:37.335 "compare": false, 00:04:37.335 "compare_and_write": false, 00:04:37.335 "abort": true, 00:04:37.335 "seek_hole": false, 00:04:37.335 "seek_data": false, 00:04:37.335 "copy": 
true, 00:04:37.335 "nvme_iov_md": false 00:04:37.335 }, 00:04:37.335 "memory_domains": [ 00:04:37.335 { 00:04:37.335 "dma_device_id": "system", 00:04:37.335 "dma_device_type": 1 00:04:37.335 }, 00:04:37.335 { 00:04:37.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.335 "dma_device_type": 2 00:04:37.335 } 00:04:37.335 ], 00:04:37.335 "driver_specific": {} 00:04:37.335 } 00:04:37.335 ]' 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.335 16:05:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.335 16:05:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.595 16:05:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.595 00:04:37.595 real 0m0.155s 00:04:37.595 user 0m0.093s 00:04:37.595 sys 0m0.024s 00:04:37.595 16:05:52 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.595 16:05:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.595 ************************************ 00:04:37.595 END TEST rpc_plugins 00:04:37.595 ************************************ 00:04:37.595 16:05:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.595 16:05:52 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.595 16:05:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.595 16:05:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.595 ************************************ 00:04:37.595 START TEST rpc_trace_cmd_test 00:04:37.595 ************************************ 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:37.595 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56793", 00:04:37.595 "tpoint_group_mask": "0x8", 00:04:37.595 "iscsi_conn": { 00:04:37.595 "mask": "0x2", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "scsi": { 00:04:37.595 "mask": "0x4", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "bdev": { 00:04:37.595 "mask": "0x8", 00:04:37.595 "tpoint_mask": "0xffffffffffffffff" 00:04:37.595 }, 00:04:37.595 "nvmf_rdma": { 00:04:37.595 "mask": "0x10", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "nvmf_tcp": { 00:04:37.595 "mask": "0x20", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "ftl": { 00:04:37.595 "mask": "0x40", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "blobfs": { 00:04:37.595 "mask": "0x80", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "dsa": { 00:04:37.595 "mask": "0x200", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "thread": { 00:04:37.595 "mask": "0x400", 00:04:37.595 
"tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "nvme_pcie": { 00:04:37.595 "mask": "0x800", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "iaa": { 00:04:37.595 "mask": "0x1000", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "nvme_tcp": { 00:04:37.595 "mask": "0x2000", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "bdev_nvme": { 00:04:37.595 "mask": "0x4000", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "sock": { 00:04:37.595 "mask": "0x8000", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "blob": { 00:04:37.595 "mask": "0x10000", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 }, 00:04:37.595 "bdev_raid": { 00:04:37.595 "mask": "0x20000", 00:04:37.595 "tpoint_mask": "0x0" 00:04:37.595 } 00:04:37.595 }' 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.595 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.855 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.855 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.855 16:05:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.855 00:04:37.855 real 0m0.270s 00:04:37.855 user 0m0.216s 00:04:37.855 sys 0m0.043s 00:04:37.855 16:05:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.855 16:05:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.855 
************************************ 00:04:37.855 END TEST rpc_trace_cmd_test 00:04:37.855 ************************************ 00:04:37.855 16:05:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.855 16:05:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.855 16:05:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.855 16:05:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.855 16:05:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.855 16:05:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.855 ************************************ 00:04:37.855 START TEST rpc_daemon_integrity 00:04:37.855 ************************************ 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.855 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.855 { 00:04:37.855 "name": "Malloc2", 00:04:37.855 "aliases": [ 00:04:37.855 "5255d7c3-4e53-46b8-b2c0-debcd72db04f" 00:04:37.855 ], 00:04:37.855 "product_name": "Malloc disk", 00:04:37.855 "block_size": 512, 00:04:37.855 "num_blocks": 16384, 00:04:37.855 "uuid": "5255d7c3-4e53-46b8-b2c0-debcd72db04f", 00:04:37.855 "assigned_rate_limits": { 00:04:37.855 "rw_ios_per_sec": 0, 00:04:37.855 "rw_mbytes_per_sec": 0, 00:04:37.855 "r_mbytes_per_sec": 0, 00:04:37.855 "w_mbytes_per_sec": 0 00:04:37.855 }, 00:04:37.855 "claimed": false, 00:04:37.855 "zoned": false, 00:04:37.855 "supported_io_types": { 00:04:37.855 "read": true, 00:04:37.855 "write": true, 00:04:37.855 "unmap": true, 00:04:37.855 "flush": true, 00:04:37.855 "reset": true, 00:04:37.855 "nvme_admin": false, 00:04:37.855 "nvme_io": false, 00:04:37.855 "nvme_io_md": false, 00:04:37.855 "write_zeroes": true, 00:04:37.855 "zcopy": true, 00:04:37.855 "get_zone_info": false, 00:04:37.855 "zone_management": false, 00:04:37.855 "zone_append": false, 00:04:37.855 "compare": false, 00:04:37.855 "compare_and_write": false, 00:04:37.855 "abort": true, 00:04:37.855 "seek_hole": false, 00:04:37.855 "seek_data": false, 00:04:37.855 "copy": true, 00:04:37.855 "nvme_iov_md": false 00:04:37.855 }, 00:04:37.855 "memory_domains": [ 00:04:37.855 { 00:04:37.855 "dma_device_id": "system", 00:04:37.855 "dma_device_type": 1 00:04:37.855 }, 00:04:37.856 { 00:04:37.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.856 "dma_device_type": 2 00:04:37.856 } 00:04:37.856 ], 00:04:37.856 "driver_specific": {} 00:04:37.856 } 00:04:37.856 ]' 00:04:37.856 
16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.116 [2024-09-28 16:05:52.585808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.116 [2024-09-28 16:05:52.585864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.116 [2024-09-28 16:05:52.585882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:38.116 [2024-09-28 16:05:52.585893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.116 [2024-09-28 16:05:52.587966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.116 [2024-09-28 16:05:52.588006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.116 Passthru0 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.116 { 00:04:38.116 "name": "Malloc2", 00:04:38.116 "aliases": [ 00:04:38.116 "5255d7c3-4e53-46b8-b2c0-debcd72db04f" 00:04:38.116 ], 00:04:38.116 "product_name": "Malloc disk", 00:04:38.116 "block_size": 512, 
00:04:38.116 "num_blocks": 16384, 00:04:38.116 "uuid": "5255d7c3-4e53-46b8-b2c0-debcd72db04f", 00:04:38.116 "assigned_rate_limits": { 00:04:38.116 "rw_ios_per_sec": 0, 00:04:38.116 "rw_mbytes_per_sec": 0, 00:04:38.116 "r_mbytes_per_sec": 0, 00:04:38.116 "w_mbytes_per_sec": 0 00:04:38.116 }, 00:04:38.116 "claimed": true, 00:04:38.116 "claim_type": "exclusive_write", 00:04:38.116 "zoned": false, 00:04:38.116 "supported_io_types": { 00:04:38.116 "read": true, 00:04:38.116 "write": true, 00:04:38.116 "unmap": true, 00:04:38.116 "flush": true, 00:04:38.116 "reset": true, 00:04:38.116 "nvme_admin": false, 00:04:38.116 "nvme_io": false, 00:04:38.116 "nvme_io_md": false, 00:04:38.116 "write_zeroes": true, 00:04:38.116 "zcopy": true, 00:04:38.116 "get_zone_info": false, 00:04:38.116 "zone_management": false, 00:04:38.116 "zone_append": false, 00:04:38.116 "compare": false, 00:04:38.116 "compare_and_write": false, 00:04:38.116 "abort": true, 00:04:38.116 "seek_hole": false, 00:04:38.116 "seek_data": false, 00:04:38.116 "copy": true, 00:04:38.116 "nvme_iov_md": false 00:04:38.116 }, 00:04:38.116 "memory_domains": [ 00:04:38.116 { 00:04:38.116 "dma_device_id": "system", 00:04:38.116 "dma_device_type": 1 00:04:38.116 }, 00:04:38.116 { 00:04:38.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.116 "dma_device_type": 2 00:04:38.116 } 00:04:38.116 ], 00:04:38.116 "driver_specific": {} 00:04:38.116 }, 00:04:38.116 { 00:04:38.116 "name": "Passthru0", 00:04:38.116 "aliases": [ 00:04:38.116 "3cc8ef5d-0e96-5501-acba-170aff3a3345" 00:04:38.116 ], 00:04:38.116 "product_name": "passthru", 00:04:38.116 "block_size": 512, 00:04:38.116 "num_blocks": 16384, 00:04:38.116 "uuid": "3cc8ef5d-0e96-5501-acba-170aff3a3345", 00:04:38.116 "assigned_rate_limits": { 00:04:38.116 "rw_ios_per_sec": 0, 00:04:38.116 "rw_mbytes_per_sec": 0, 00:04:38.116 "r_mbytes_per_sec": 0, 00:04:38.116 "w_mbytes_per_sec": 0 00:04:38.116 }, 00:04:38.116 "claimed": false, 00:04:38.116 "zoned": false, 00:04:38.116 
"supported_io_types": { 00:04:38.116 "read": true, 00:04:38.116 "write": true, 00:04:38.116 "unmap": true, 00:04:38.116 "flush": true, 00:04:38.116 "reset": true, 00:04:38.116 "nvme_admin": false, 00:04:38.116 "nvme_io": false, 00:04:38.116 "nvme_io_md": false, 00:04:38.116 "write_zeroes": true, 00:04:38.116 "zcopy": true, 00:04:38.116 "get_zone_info": false, 00:04:38.116 "zone_management": false, 00:04:38.116 "zone_append": false, 00:04:38.116 "compare": false, 00:04:38.116 "compare_and_write": false, 00:04:38.116 "abort": true, 00:04:38.116 "seek_hole": false, 00:04:38.116 "seek_data": false, 00:04:38.116 "copy": true, 00:04:38.116 "nvme_iov_md": false 00:04:38.116 }, 00:04:38.116 "memory_domains": [ 00:04:38.116 { 00:04:38.116 "dma_device_id": "system", 00:04:38.116 "dma_device_type": 1 00:04:38.116 }, 00:04:38.116 { 00:04:38.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.116 "dma_device_type": 2 00:04:38.116 } 00:04:38.116 ], 00:04:38.116 "driver_specific": { 00:04:38.116 "passthru": { 00:04:38.116 "name": "Passthru0", 00:04:38.116 "base_bdev_name": "Malloc2" 00:04:38.116 } 00:04:38.116 } 00:04:38.116 } 00:04:38.116 ]' 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.116 00:04:38.116 real 0m0.347s 00:04:38.116 user 0m0.198s 00:04:38.116 sys 0m0.050s 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.116 16:05:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.116 ************************************ 00:04:38.116 END TEST rpc_daemon_integrity 00:04:38.116 ************************************ 00:04:38.376 16:05:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.376 16:05:52 rpc -- rpc/rpc.sh@84 -- # killprocess 56793 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@950 -- # '[' -z 56793 ']' 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@954 -- # kill -0 56793 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@955 -- # uname 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56793 00:04:38.376 killing process with pid 56793 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 56793' 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@969 -- # kill 56793 00:04:38.376 16:05:52 rpc -- common/autotest_common.sh@974 -- # wait 56793 00:04:40.915 00:04:40.915 real 0m5.312s 00:04:40.915 user 0m5.842s 00:04:40.915 sys 0m0.919s 00:04:40.915 16:05:55 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.915 16:05:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.915 ************************************ 00:04:40.915 END TEST rpc 00:04:40.915 ************************************ 00:04:40.915 16:05:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.915 16:05:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.915 16:05:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.915 16:05:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.915 ************************************ 00:04:40.915 START TEST skip_rpc 00:04:40.915 ************************************ 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.915 * Looking for test storage... 
00:04:40.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.915 16:05:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:40.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.915 --rc genhtml_branch_coverage=1 00:04:40.915 --rc genhtml_function_coverage=1 00:04:40.915 --rc genhtml_legend=1 00:04:40.915 --rc geninfo_all_blocks=1 00:04:40.915 --rc geninfo_unexecuted_blocks=1 00:04:40.915 00:04:40.915 ' 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:40.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.915 --rc genhtml_branch_coverage=1 00:04:40.915 --rc genhtml_function_coverage=1 00:04:40.915 --rc genhtml_legend=1 00:04:40.915 --rc geninfo_all_blocks=1 00:04:40.915 --rc geninfo_unexecuted_blocks=1 00:04:40.915 00:04:40.915 ' 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:40.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.915 --rc genhtml_branch_coverage=1 00:04:40.915 --rc genhtml_function_coverage=1 00:04:40.915 --rc genhtml_legend=1 00:04:40.915 --rc geninfo_all_blocks=1 00:04:40.915 --rc geninfo_unexecuted_blocks=1 00:04:40.915 00:04:40.915 ' 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:40.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.915 --rc genhtml_branch_coverage=1 00:04:40.915 --rc genhtml_function_coverage=1 00:04:40.915 --rc genhtml_legend=1 00:04:40.915 --rc geninfo_all_blocks=1 00:04:40.915 --rc geninfo_unexecuted_blocks=1 00:04:40.915 00:04:40.915 ' 00:04:40.915 16:05:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.915 16:05:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.915 16:05:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.915 16:05:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.916 ************************************ 00:04:40.916 START TEST skip_rpc 00:04:40.916 ************************************ 00:04:40.916 16:05:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:40.916 16:05:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57027 00:04:40.916 16:05:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.916 16:05:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.916 16:05:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.176 [2024-09-28 16:05:55.651009] Starting SPDK v25.01-pre 
git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:41.176 [2024-09-28 16:05:55.651102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57027 ] 00:04:41.176 [2024-09-28 16:05:55.815603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.434 [2024-09-28 16:05:56.024241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57027 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57027 ']' 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57027 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57027 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.757 killing process with pid 57027 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57027' 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57027 00:04:46.757 16:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57027 00:04:48.661 00:04:48.661 real 0m7.456s 00:04:48.661 user 0m6.993s 00:04:48.661 sys 0m0.381s 00:04:48.661 16:06:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.661 16:06:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.661 ************************************ 00:04:48.661 END TEST skip_rpc 00:04:48.661 ************************************ 00:04:48.661 16:06:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:48.661 16:06:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.661 16:06:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.661 16:06:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.661 
************************************ 00:04:48.661 START TEST skip_rpc_with_json 00:04:48.661 ************************************ 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57137 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57137 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57137 ']' 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.661 16:06:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.661 [2024-09-28 16:06:03.178504] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:48.661 [2024-09-28 16:06:03.178619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57137 ] 00:04:48.661 [2024-09-28 16:06:03.331376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.920 [2024-09-28 16:06:03.544930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.858 [2024-09-28 16:06:04.398049] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:49.858 request: 00:04:49.858 { 00:04:49.858 "trtype": "tcp", 00:04:49.858 "method": "nvmf_get_transports", 00:04:49.858 "req_id": 1 00:04:49.858 } 00:04:49.858 Got JSON-RPC error response 00:04:49.858 response: 00:04:49.858 { 00:04:49.858 "code": -19, 00:04:49.858 "message": "No such device" 00:04:49.858 } 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.858 [2024-09-28 16:06:04.406151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.858 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.117 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.117 16:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.117 { 00:04:50.117 "subsystems": [ 00:04:50.117 { 00:04:50.117 "subsystem": "fsdev", 00:04:50.117 "config": [ 00:04:50.117 { 00:04:50.117 "method": "fsdev_set_opts", 00:04:50.117 "params": { 00:04:50.117 "fsdev_io_pool_size": 65535, 00:04:50.117 "fsdev_io_cache_size": 256 00:04:50.117 } 00:04:50.117 } 00:04:50.117 ] 00:04:50.117 }, 00:04:50.117 { 00:04:50.117 "subsystem": "keyring", 00:04:50.117 "config": [] 00:04:50.117 }, 00:04:50.117 { 00:04:50.117 "subsystem": "iobuf", 00:04:50.117 "config": [ 00:04:50.117 { 00:04:50.117 "method": "iobuf_set_options", 00:04:50.117 "params": { 00:04:50.118 "small_pool_count": 8192, 00:04:50.118 "large_pool_count": 1024, 00:04:50.118 "small_bufsize": 8192, 00:04:50.118 "large_bufsize": 135168 00:04:50.118 } 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "sock", 00:04:50.118 "config": [ 00:04:50.118 { 00:04:50.118 "method": "sock_set_default_impl", 00:04:50.118 "params": { 00:04:50.118 "impl_name": "posix" 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "sock_impl_set_options", 00:04:50.118 "params": { 00:04:50.118 "impl_name": "ssl", 00:04:50.118 "recv_buf_size": 4096, 00:04:50.118 "send_buf_size": 4096, 00:04:50.118 "enable_recv_pipe": true, 00:04:50.118 "enable_quickack": false, 00:04:50.118 "enable_placement_id": 0, 00:04:50.118 
"enable_zerocopy_send_server": true, 00:04:50.118 "enable_zerocopy_send_client": false, 00:04:50.118 "zerocopy_threshold": 0, 00:04:50.118 "tls_version": 0, 00:04:50.118 "enable_ktls": false 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "sock_impl_set_options", 00:04:50.118 "params": { 00:04:50.118 "impl_name": "posix", 00:04:50.118 "recv_buf_size": 2097152, 00:04:50.118 "send_buf_size": 2097152, 00:04:50.118 "enable_recv_pipe": true, 00:04:50.118 "enable_quickack": false, 00:04:50.118 "enable_placement_id": 0, 00:04:50.118 "enable_zerocopy_send_server": true, 00:04:50.118 "enable_zerocopy_send_client": false, 00:04:50.118 "zerocopy_threshold": 0, 00:04:50.118 "tls_version": 0, 00:04:50.118 "enable_ktls": false 00:04:50.118 } 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "vmd", 00:04:50.118 "config": [] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "accel", 00:04:50.118 "config": [ 00:04:50.118 { 00:04:50.118 "method": "accel_set_options", 00:04:50.118 "params": { 00:04:50.118 "small_cache_size": 128, 00:04:50.118 "large_cache_size": 16, 00:04:50.118 "task_count": 2048, 00:04:50.118 "sequence_count": 2048, 00:04:50.118 "buf_count": 2048 00:04:50.118 } 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "bdev", 00:04:50.118 "config": [ 00:04:50.118 { 00:04:50.118 "method": "bdev_set_options", 00:04:50.118 "params": { 00:04:50.118 "bdev_io_pool_size": 65535, 00:04:50.118 "bdev_io_cache_size": 256, 00:04:50.118 "bdev_auto_examine": true, 00:04:50.118 "iobuf_small_cache_size": 128, 00:04:50.118 "iobuf_large_cache_size": 16 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "bdev_raid_set_options", 00:04:50.118 "params": { 00:04:50.118 "process_window_size_kb": 1024, 00:04:50.118 "process_max_bandwidth_mb_sec": 0 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "bdev_iscsi_set_options", 00:04:50.118 "params": { 00:04:50.118 
"timeout_sec": 30 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "bdev_nvme_set_options", 00:04:50.118 "params": { 00:04:50.118 "action_on_timeout": "none", 00:04:50.118 "timeout_us": 0, 00:04:50.118 "timeout_admin_us": 0, 00:04:50.118 "keep_alive_timeout_ms": 10000, 00:04:50.118 "arbitration_burst": 0, 00:04:50.118 "low_priority_weight": 0, 00:04:50.118 "medium_priority_weight": 0, 00:04:50.118 "high_priority_weight": 0, 00:04:50.118 "nvme_adminq_poll_period_us": 10000, 00:04:50.118 "nvme_ioq_poll_period_us": 0, 00:04:50.118 "io_queue_requests": 0, 00:04:50.118 "delay_cmd_submit": true, 00:04:50.118 "transport_retry_count": 4, 00:04:50.118 "bdev_retry_count": 3, 00:04:50.118 "transport_ack_timeout": 0, 00:04:50.118 "ctrlr_loss_timeout_sec": 0, 00:04:50.118 "reconnect_delay_sec": 0, 00:04:50.118 "fast_io_fail_timeout_sec": 0, 00:04:50.118 "disable_auto_failback": false, 00:04:50.118 "generate_uuids": false, 00:04:50.118 "transport_tos": 0, 00:04:50.118 "nvme_error_stat": false, 00:04:50.118 "rdma_srq_size": 0, 00:04:50.118 "io_path_stat": false, 00:04:50.118 "allow_accel_sequence": false, 00:04:50.118 "rdma_max_cq_size": 0, 00:04:50.118 "rdma_cm_event_timeout_ms": 0, 00:04:50.118 "dhchap_digests": [ 00:04:50.118 "sha256", 00:04:50.118 "sha384", 00:04:50.118 "sha512" 00:04:50.118 ], 00:04:50.118 "dhchap_dhgroups": [ 00:04:50.118 "null", 00:04:50.118 "ffdhe2048", 00:04:50.118 "ffdhe3072", 00:04:50.118 "ffdhe4096", 00:04:50.118 "ffdhe6144", 00:04:50.118 "ffdhe8192" 00:04:50.118 ] 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "bdev_nvme_set_hotplug", 00:04:50.118 "params": { 00:04:50.118 "period_us": 100000, 00:04:50.118 "enable": false 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "bdev_wait_for_examine" 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "scsi", 00:04:50.118 "config": null 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "scheduler", 
00:04:50.118 "config": [ 00:04:50.118 { 00:04:50.118 "method": "framework_set_scheduler", 00:04:50.118 "params": { 00:04:50.118 "name": "static" 00:04:50.118 } 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "vhost_scsi", 00:04:50.118 "config": [] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "vhost_blk", 00:04:50.118 "config": [] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "ublk", 00:04:50.118 "config": [] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "nbd", 00:04:50.118 "config": [] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "nvmf", 00:04:50.118 "config": [ 00:04:50.118 { 00:04:50.118 "method": "nvmf_set_config", 00:04:50.118 "params": { 00:04:50.118 "discovery_filter": "match_any", 00:04:50.118 "admin_cmd_passthru": { 00:04:50.118 "identify_ctrlr": false 00:04:50.118 }, 00:04:50.118 "dhchap_digests": [ 00:04:50.118 "sha256", 00:04:50.118 "sha384", 00:04:50.118 "sha512" 00:04:50.118 ], 00:04:50.118 "dhchap_dhgroups": [ 00:04:50.118 "null", 00:04:50.118 "ffdhe2048", 00:04:50.118 "ffdhe3072", 00:04:50.118 "ffdhe4096", 00:04:50.118 "ffdhe6144", 00:04:50.118 "ffdhe8192" 00:04:50.118 ] 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "nvmf_set_max_subsystems", 00:04:50.118 "params": { 00:04:50.118 "max_subsystems": 1024 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "nvmf_set_crdt", 00:04:50.118 "params": { 00:04:50.118 "crdt1": 0, 00:04:50.118 "crdt2": 0, 00:04:50.118 "crdt3": 0 00:04:50.118 } 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "method": "nvmf_create_transport", 00:04:50.118 "params": { 00:04:50.118 "trtype": "TCP", 00:04:50.118 "max_queue_depth": 128, 00:04:50.118 "max_io_qpairs_per_ctrlr": 127, 00:04:50.118 "in_capsule_data_size": 4096, 00:04:50.118 "max_io_size": 131072, 00:04:50.118 "io_unit_size": 131072, 00:04:50.118 "max_aq_depth": 128, 00:04:50.118 "num_shared_buffers": 511, 00:04:50.118 "buf_cache_size": 4294967295, 
00:04:50.118 "dif_insert_or_strip": false, 00:04:50.118 "zcopy": false, 00:04:50.118 "c2h_success": true, 00:04:50.118 "sock_priority": 0, 00:04:50.118 "abort_timeout_sec": 1, 00:04:50.118 "ack_timeout": 0, 00:04:50.118 "data_wr_pool_size": 0 00:04:50.118 } 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 }, 00:04:50.118 { 00:04:50.118 "subsystem": "iscsi", 00:04:50.118 "config": [ 00:04:50.118 { 00:04:50.118 "method": "iscsi_set_options", 00:04:50.118 "params": { 00:04:50.118 "node_base": "iqn.2016-06.io.spdk", 00:04:50.118 "max_sessions": 128, 00:04:50.118 "max_connections_per_session": 2, 00:04:50.118 "max_queue_depth": 64, 00:04:50.118 "default_time2wait": 2, 00:04:50.118 "default_time2retain": 20, 00:04:50.118 "first_burst_length": 8192, 00:04:50.118 "immediate_data": true, 00:04:50.118 "allow_duplicated_isid": false, 00:04:50.118 "error_recovery_level": 0, 00:04:50.118 "nop_timeout": 60, 00:04:50.118 "nop_in_interval": 30, 00:04:50.118 "disable_chap": false, 00:04:50.118 "require_chap": false, 00:04:50.118 "mutual_chap": false, 00:04:50.118 "chap_group": 0, 00:04:50.118 "max_large_datain_per_connection": 64, 00:04:50.118 "max_r2t_per_connection": 4, 00:04:50.118 "pdu_pool_size": 36864, 00:04:50.118 "immediate_data_pool_size": 16384, 00:04:50.118 "data_out_pool_size": 2048 00:04:50.118 } 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 } 00:04:50.118 ] 00:04:50.118 } 00:04:50.118 16:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:50.118 16:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57137 00:04:50.118 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57137 ']' 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57137 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57137 00:04:50.119 killing process with pid 57137 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57137' 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57137 00:04:50.119 16:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57137 00:04:52.656 16:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:52.656 16:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57193 00:04:52.656 16:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57193 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57193 ']' 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57193 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57193 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.935 killing process with pid 57193 
00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57193' 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57193 00:04:57.935 16:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57193 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.476 00:05:00.476 real 0m11.935s 00:05:00.476 user 0m11.208s 00:05:00.476 sys 0m0.993s 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.476 ************************************ 00:05:00.476 END TEST skip_rpc_with_json 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.476 ************************************ 00:05:00.476 16:06:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.476 16:06:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.476 16:06:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.476 16:06:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.476 ************************************ 00:05:00.476 START TEST skip_rpc_with_delay 00:05:00.476 ************************************ 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.476 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.736 [2024-09-28 16:06:15.190729] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:00.736 [2024-09-28 16:06:15.190857] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:00.736 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:00.736 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.736 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.736 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.736 00:05:00.736 real 0m0.177s 00:05:00.736 user 0m0.102s 00:05:00.736 sys 0m0.073s 00:05:00.736 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.736 16:06:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 ************************************ 00:05:00.736 END TEST skip_rpc_with_delay 00:05:00.736 ************************************ 00:05:00.736 16:06:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.736 16:06:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.736 16:06:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.736 16:06:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.736 16:06:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.736 16:06:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 ************************************ 00:05:00.736 START TEST exit_on_failed_rpc_init 00:05:00.736 ************************************ 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57327 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57327 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57327 ']' 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.736 16:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.996 [2024-09-28 16:06:15.466180] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:00.996 [2024-09-28 16:06:15.466351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57327 ] 00:05:00.996 [2024-09-28 16:06:15.632485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.256 [2024-09-28 16:06:15.878034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.194 16:06:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:02.194 16:06:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.454 [2024-09-28 16:06:16.949781] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:02.454 [2024-09-28 16:06:16.949936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57350 ] 00:05:02.714 [2024-09-28 16:06:17.139412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.714 [2024-09-28 16:06:17.390009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.714 [2024-09-28 16:06:17.390121] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:02.714 [2024-09-28 16:06:17.390135] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:02.714 [2024-09-28 16:06:17.390151] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57327 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57327 ']' 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57327 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.284 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57327 00:05:03.285 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.285 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.285 killing process with pid 57327 00:05:03.285 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57327' 00:05:03.285 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57327 00:05:03.285 16:06:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57327 00:05:05.824 00:05:05.824 real 0m5.157s 00:05:05.824 user 0m5.622s 00:05:05.824 sys 0m0.799s 00:05:05.824 16:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.824 16:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.824 ************************************ 00:05:05.824 END TEST exit_on_failed_rpc_init 00:05:05.824 ************************************ 00:05:06.084 16:06:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.084 00:05:06.084 real 0m25.224s 00:05:06.084 user 0m24.118s 00:05:06.084 sys 0m2.563s 00:05:06.084 16:06:20 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.084 16:06:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.084 ************************************ 00:05:06.084 END TEST skip_rpc 00:05:06.084 ************************************ 00:05:06.084 16:06:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:06.084 16:06:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.084 16:06:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.084 16:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.084 ************************************ 00:05:06.084 START TEST rpc_client 00:05:06.084 ************************************ 00:05:06.084 16:06:20 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:06.084 * Looking for test storage... 
00:05:06.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:06.084 16:06:20 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.084 16:06:20 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:06.084 16:06:20 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:06.344 16:06:20 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.344 16:06:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:06.344 16:06:20 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.344 16:06:20 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.344 --rc genhtml_branch_coverage=1 00:05:06.344 --rc genhtml_function_coverage=1 00:05:06.344 --rc genhtml_legend=1 00:05:06.344 --rc geninfo_all_blocks=1 00:05:06.344 --rc geninfo_unexecuted_blocks=1 00:05:06.344 00:05:06.344 ' 00:05:06.344 16:06:20 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.344 --rc genhtml_branch_coverage=1 00:05:06.344 --rc genhtml_function_coverage=1 00:05:06.344 --rc genhtml_legend=1 00:05:06.344 --rc geninfo_all_blocks=1 00:05:06.344 --rc geninfo_unexecuted_blocks=1 00:05:06.344 00:05:06.344 ' 00:05:06.344 16:06:20 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.344 --rc genhtml_branch_coverage=1 00:05:06.344 --rc genhtml_function_coverage=1 00:05:06.344 --rc genhtml_legend=1 00:05:06.344 --rc geninfo_all_blocks=1 00:05:06.344 --rc geninfo_unexecuted_blocks=1 00:05:06.344 00:05:06.344 ' 00:05:06.344 16:06:20 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.344 --rc genhtml_branch_coverage=1 00:05:06.344 --rc genhtml_function_coverage=1 00:05:06.344 --rc genhtml_legend=1 00:05:06.344 --rc geninfo_all_blocks=1 00:05:06.344 --rc geninfo_unexecuted_blocks=1 00:05:06.344 00:05:06.344 ' 00:05:06.344 16:06:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:06.344 OK 00:05:06.344 16:06:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.344 00:05:06.344 real 0m0.284s 00:05:06.344 user 0m0.153s 00:05:06.344 sys 0m0.148s 00:05:06.344 16:06:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.344 16:06:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:06.344 ************************************ 00:05:06.344 END TEST rpc_client 00:05:06.344 ************************************ 00:05:06.344 16:06:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:06.344 16:06:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.344 16:06:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.344 16:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.344 ************************************ 00:05:06.344 START TEST json_config 00:05:06.344 ************************************ 00:05:06.344 16:06:20 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.605 16:06:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.605 16:06:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.605 16:06:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.605 16:06:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.605 16:06:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.605 16:06:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.605 16:06:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.605 16:06:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:06.605 16:06:21 json_config -- scripts/common.sh@345 -- # : 1 00:05:06.605 16:06:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.605 16:06:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.605 16:06:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:06.605 16:06:21 json_config -- scripts/common.sh@353 -- # local d=1 00:05:06.605 16:06:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.605 16:06:21 json_config -- scripts/common.sh@355 -- # echo 1 00:05:06.605 16:06:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.605 16:06:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@353 -- # local d=2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.605 16:06:21 json_config -- scripts/common.sh@355 -- # echo 2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.605 16:06:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.605 16:06:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.605 16:06:21 json_config -- scripts/common.sh@368 -- # return 0 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:06.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.605 --rc genhtml_branch_coverage=1 00:05:06.605 --rc genhtml_function_coverage=1 00:05:06.605 --rc genhtml_legend=1 00:05:06.605 --rc geninfo_all_blocks=1 00:05:06.605 --rc geninfo_unexecuted_blocks=1 00:05:06.605 00:05:06.605 ' 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:06.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.605 --rc genhtml_branch_coverage=1 00:05:06.605 --rc genhtml_function_coverage=1 00:05:06.605 --rc genhtml_legend=1 00:05:06.605 --rc geninfo_all_blocks=1 00:05:06.605 --rc geninfo_unexecuted_blocks=1 00:05:06.605 00:05:06.605 ' 00:05:06.605 16:06:21 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:06.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.605 --rc genhtml_branch_coverage=1 00:05:06.605 --rc genhtml_function_coverage=1 00:05:06.605 --rc genhtml_legend=1 00:05:06.605 --rc geninfo_all_blocks=1 00:05:06.605 --rc geninfo_unexecuted_blocks=1 00:05:06.605 00:05:06.605 ' 00:05:06.605 16:06:21 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:06.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.605 --rc genhtml_branch_coverage=1 00:05:06.605 --rc genhtml_function_coverage=1 00:05:06.605 --rc genhtml_legend=1 00:05:06.605 --rc geninfo_all_blocks=1 00:05:06.605 --rc geninfo_unexecuted_blocks=1 00:05:06.605 00:05:06.605 ' 00:05:06.605 16:06:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b0fa62cc-0be9-4e6c-a497-5229b0bef527 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=b0fa62cc-0be9-4e6c-a497-5229b0bef527 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.605 16:06:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.605 16:06:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.605 16:06:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.605 16:06:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.605 16:06:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.605 16:06:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.605 16:06:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.605 16:06:21 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.605 16:06:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@51 -- # : 0 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.605 16:06:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.606 16:06:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.606 16:06:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.606 16:06:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.606 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.606 16:06:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.606 16:06:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.606 16:06:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.606 16:06:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:06.606 16:06:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.606 16:06:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.606 16:06:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.606 16:06:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.606 16:06:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:06.606 WARNING: No tests are enabled so not running JSON configuration tests 00:05:06.606 16:06:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:06.606 00:05:06.606 real 0m0.224s 00:05:06.606 user 0m0.147s 00:05:06.606 sys 0m0.083s 00:05:06.606 16:06:21 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.606 16:06:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.606 ************************************ 00:05:06.606 END TEST json_config 00:05:06.606 ************************************ 00:05:06.606 16:06:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:06.606 16:06:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.606 16:06:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.606 16:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:06.606 ************************************ 00:05:06.606 START TEST json_config_extra_key 00:05:06.606 ************************************ 00:05:06.606 16:06:21 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:06.868 16:06:21 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.869 16:06:21 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:06.869 16:06:21 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:06.869 16:06:21 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:06.869 16:06:21 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.869 16:06:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:06.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.869 --rc genhtml_branch_coverage=1 00:05:06.869 --rc genhtml_function_coverage=1 00:05:06.869 --rc genhtml_legend=1 00:05:06.869 --rc geninfo_all_blocks=1 00:05:06.869 --rc geninfo_unexecuted_blocks=1 00:05:06.869 00:05:06.869 ' 00:05:06.869 16:06:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:06.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.869 --rc genhtml_branch_coverage=1 00:05:06.869 --rc genhtml_function_coverage=1 00:05:06.869 --rc 
genhtml_legend=1 00:05:06.869 --rc geninfo_all_blocks=1 00:05:06.869 --rc geninfo_unexecuted_blocks=1 00:05:06.869 00:05:06.869 ' 00:05:06.869 16:06:21 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:06.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.869 --rc genhtml_branch_coverage=1 00:05:06.869 --rc genhtml_function_coverage=1 00:05:06.869 --rc genhtml_legend=1 00:05:06.869 --rc geninfo_all_blocks=1 00:05:06.869 --rc geninfo_unexecuted_blocks=1 00:05:06.869 00:05:06.869 ' 00:05:06.869 16:06:21 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:06.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.869 --rc genhtml_branch_coverage=1 00:05:06.869 --rc genhtml_function_coverage=1 00:05:06.869 --rc genhtml_legend=1 00:05:06.869 --rc geninfo_all_blocks=1 00:05:06.869 --rc geninfo_unexecuted_blocks=1 00:05:06.869 00:05:06.869 ' 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b0fa62cc-0be9-4e6c-a497-5229b0bef527 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b0fa62cc-0be9-4e6c-a497-5229b0bef527 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.869 16:06:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.869 16:06:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.869 16:06:21 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.869 16:06:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.869 16:06:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:06.869 16:06:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.869 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.869 16:06:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.869 INFO: launching applications... 00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:06.869 16:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:06.869 16:06:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:06.869 16:06:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:06.869 16:06:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.869 16:06:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.869 16:06:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.870 16:06:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.870 16:06:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.870 16:06:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57566 00:05:06.870 Waiting for target to run... 00:05:06.870 16:06:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.870 16:06:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57566 /var/tmp/spdk_tgt.sock 00:05:06.870 16:06:21 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57566 ']' 00:05:06.870 16:06:21 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.870 16:06:21 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.870 16:06:21 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:06.870 16:06:21 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:06.870 16:06:21 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.870 16:06:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.152 [2024-09-28 16:06:21.604880] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:07.152 [2024-09-28 16:06:21.605028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57566 ] 00:05:07.743 [2024-09-28 16:06:22.159726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.743 [2024-09-28 16:06:22.380386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.682 16:06:23 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.682 16:06:23 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:08.682 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:08.682 INFO: shutting down applications... 00:05:08.682 16:06:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:08.682 16:06:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57566 ]] 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57566 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57566 00:05:08.682 16:06:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.942 16:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.942 16:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.942 16:06:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57566 00:05:08.942 16:06:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.511 16:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.511 16:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.511 16:06:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57566 00:05:09.511 16:06:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.081 16:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.081 16:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.081 16:06:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57566 00:05:10.081 16:06:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.650 16:06:25 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:10.650 16:06:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.650 16:06:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57566 00:05:10.650 16:06:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.217 16:06:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.217 16:06:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.217 16:06:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57566 00:05:11.217 16:06:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.475 16:06:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.475 16:06:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.475 16:06:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57566 00:05:11.475 16:06:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.475 16:06:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.475 16:06:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.475 SPDK target shutdown done 00:05:11.475 16:06:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.475 Success 00:05:11.475 16:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.475 00:05:11.475 real 0m4.866s 00:05:11.475 user 0m4.375s 00:05:11.475 sys 0m0.810s 00:05:11.475 16:06:26 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.475 16:06:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.475 ************************************ 00:05:11.475 END TEST json_config_extra_key 00:05:11.475 ************************************ 00:05:11.733 16:06:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.733 16:06:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.733 16:06:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.733 16:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:11.733 ************************************ 00:05:11.733 START TEST alias_rpc 00:05:11.734 ************************************ 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.734 * Looking for test storage... 00:05:11.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.734 16:06:26 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.734 16:06:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.734 --rc genhtml_branch_coverage=1 00:05:11.734 --rc genhtml_function_coverage=1 00:05:11.734 --rc genhtml_legend=1 00:05:11.734 --rc geninfo_all_blocks=1 00:05:11.734 --rc geninfo_unexecuted_blocks=1 00:05:11.734 00:05:11.734 ' 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.734 --rc genhtml_branch_coverage=1 00:05:11.734 --rc genhtml_function_coverage=1 00:05:11.734 --rc 
genhtml_legend=1 00:05:11.734 --rc geninfo_all_blocks=1 00:05:11.734 --rc geninfo_unexecuted_blocks=1 00:05:11.734 00:05:11.734 ' 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.734 --rc genhtml_branch_coverage=1 00:05:11.734 --rc genhtml_function_coverage=1 00:05:11.734 --rc genhtml_legend=1 00:05:11.734 --rc geninfo_all_blocks=1 00:05:11.734 --rc geninfo_unexecuted_blocks=1 00:05:11.734 00:05:11.734 ' 00:05:11.734 16:06:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.734 --rc genhtml_branch_coverage=1 00:05:11.734 --rc genhtml_function_coverage=1 00:05:11.734 --rc genhtml_legend=1 00:05:11.734 --rc geninfo_all_blocks=1 00:05:11.734 --rc geninfo_unexecuted_blocks=1 00:05:11.734 00:05:11.734 ' 00:05:11.734 16:06:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.993 16:06:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57679 00:05:11.993 16:06:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.993 16:06:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57679 00:05:11.993 16:06:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57679 ']' 00:05:11.993 16:06:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.993 16:06:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.993 16:06:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:11.993 16:06:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.993 16:06:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.993 [2024-09-28 16:06:26.516844] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:11.993 [2024-09-28 16:06:26.516965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57679 ] 00:05:12.251 [2024-09-28 16:06:26.679291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.251 [2024-09-28 16:06:26.924128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.630 16:06:27 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.630 16:06:27 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.630 16:06:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:13.630 16:06:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57679 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57679 ']' 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57679 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57679 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.630 killing process with pid 57679 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57679' 00:05:13.630 16:06:28 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57679 00:05:13.630 16:06:28 alias_rpc -- common/autotest_common.sh@974 -- # wait 57679 00:05:16.165 00:05:16.165 real 0m4.607s 00:05:16.165 user 0m4.391s 00:05:16.165 sys 0m0.749s 00:05:16.165 16:06:30 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.165 16:06:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.165 ************************************ 00:05:16.165 END TEST alias_rpc 00:05:16.165 ************************************ 00:05:16.424 16:06:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:16.424 16:06:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:16.424 16:06:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.424 16:06:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.424 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.424 ************************************ 00:05:16.424 START TEST spdkcli_tcp 00:05:16.424 ************************************ 00:05:16.424 16:06:30 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:16.424 * Looking for test storage... 
00:05:16.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:16.424 16:06:30 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.424 16:06:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.424 16:06:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.424 16:06:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:16.424 16:06:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.425 16:06:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.425 --rc genhtml_branch_coverage=1 00:05:16.425 --rc genhtml_function_coverage=1 00:05:16.425 --rc genhtml_legend=1 00:05:16.425 --rc geninfo_all_blocks=1 00:05:16.425 --rc geninfo_unexecuted_blocks=1 00:05:16.425 00:05:16.425 ' 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.425 --rc genhtml_branch_coverage=1 00:05:16.425 --rc genhtml_function_coverage=1 00:05:16.425 --rc genhtml_legend=1 00:05:16.425 --rc geninfo_all_blocks=1 00:05:16.425 --rc geninfo_unexecuted_blocks=1 00:05:16.425 00:05:16.425 ' 00:05:16.425 16:06:31 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.425 --rc genhtml_branch_coverage=1 00:05:16.425 --rc genhtml_function_coverage=1 00:05:16.425 --rc genhtml_legend=1 00:05:16.425 --rc geninfo_all_blocks=1 00:05:16.425 --rc geninfo_unexecuted_blocks=1 00:05:16.425 00:05:16.425 ' 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.425 --rc genhtml_branch_coverage=1 00:05:16.425 --rc genhtml_function_coverage=1 00:05:16.425 --rc genhtml_legend=1 00:05:16.425 --rc geninfo_all_blocks=1 00:05:16.425 --rc geninfo_unexecuted_blocks=1 00:05:16.425 00:05:16.425 ' 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57792 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:16.425 16:06:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57792 00:05:16.425 16:06:31 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57792 ']' 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.425 16:06:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.684 16:06:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.684 16:06:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.684 16:06:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.684 [2024-09-28 16:06:31.211008] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:16.684 [2024-09-28 16:06:31.211179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57792 ] 00:05:16.942 [2024-09-28 16:06:31.379443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.942 [2024-09-28 16:06:31.620066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.942 [2024-09-28 16:06:31.620107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.321 16:06:32 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.321 16:06:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:18.321 16:06:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57814 00:05:18.322 16:06:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.322 16:06:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.322 [ 00:05:18.322 "bdev_malloc_delete", 
00:05:18.322 "bdev_malloc_create", 00:05:18.322 "bdev_null_resize", 00:05:18.322 "bdev_null_delete", 00:05:18.322 "bdev_null_create", 00:05:18.322 "bdev_nvme_cuse_unregister", 00:05:18.322 "bdev_nvme_cuse_register", 00:05:18.322 "bdev_opal_new_user", 00:05:18.322 "bdev_opal_set_lock_state", 00:05:18.322 "bdev_opal_delete", 00:05:18.322 "bdev_opal_get_info", 00:05:18.322 "bdev_opal_create", 00:05:18.322 "bdev_nvme_opal_revert", 00:05:18.322 "bdev_nvme_opal_init", 00:05:18.322 "bdev_nvme_send_cmd", 00:05:18.322 "bdev_nvme_set_keys", 00:05:18.322 "bdev_nvme_get_path_iostat", 00:05:18.322 "bdev_nvme_get_mdns_discovery_info", 00:05:18.322 "bdev_nvme_stop_mdns_discovery", 00:05:18.322 "bdev_nvme_start_mdns_discovery", 00:05:18.322 "bdev_nvme_set_multipath_policy", 00:05:18.322 "bdev_nvme_set_preferred_path", 00:05:18.322 "bdev_nvme_get_io_paths", 00:05:18.322 "bdev_nvme_remove_error_injection", 00:05:18.322 "bdev_nvme_add_error_injection", 00:05:18.322 "bdev_nvme_get_discovery_info", 00:05:18.322 "bdev_nvme_stop_discovery", 00:05:18.322 "bdev_nvme_start_discovery", 00:05:18.322 "bdev_nvme_get_controller_health_info", 00:05:18.322 "bdev_nvme_disable_controller", 00:05:18.322 "bdev_nvme_enable_controller", 00:05:18.322 "bdev_nvme_reset_controller", 00:05:18.322 "bdev_nvme_get_transport_statistics", 00:05:18.322 "bdev_nvme_apply_firmware", 00:05:18.322 "bdev_nvme_detach_controller", 00:05:18.322 "bdev_nvme_get_controllers", 00:05:18.322 "bdev_nvme_attach_controller", 00:05:18.322 "bdev_nvme_set_hotplug", 00:05:18.322 "bdev_nvme_set_options", 00:05:18.322 "bdev_passthru_delete", 00:05:18.322 "bdev_passthru_create", 00:05:18.322 "bdev_lvol_set_parent_bdev", 00:05:18.322 "bdev_lvol_set_parent", 00:05:18.322 "bdev_lvol_check_shallow_copy", 00:05:18.322 "bdev_lvol_start_shallow_copy", 00:05:18.322 "bdev_lvol_grow_lvstore", 00:05:18.322 "bdev_lvol_get_lvols", 00:05:18.322 "bdev_lvol_get_lvstores", 00:05:18.322 "bdev_lvol_delete", 00:05:18.322 "bdev_lvol_set_read_only", 
00:05:18.322 "bdev_lvol_resize", 00:05:18.322 "bdev_lvol_decouple_parent", 00:05:18.322 "bdev_lvol_inflate", 00:05:18.322 "bdev_lvol_rename", 00:05:18.322 "bdev_lvol_clone_bdev", 00:05:18.322 "bdev_lvol_clone", 00:05:18.322 "bdev_lvol_snapshot", 00:05:18.322 "bdev_lvol_create", 00:05:18.322 "bdev_lvol_delete_lvstore", 00:05:18.322 "bdev_lvol_rename_lvstore", 00:05:18.322 "bdev_lvol_create_lvstore", 00:05:18.322 "bdev_raid_set_options", 00:05:18.322 "bdev_raid_remove_base_bdev", 00:05:18.322 "bdev_raid_add_base_bdev", 00:05:18.322 "bdev_raid_delete", 00:05:18.322 "bdev_raid_create", 00:05:18.322 "bdev_raid_get_bdevs", 00:05:18.322 "bdev_error_inject_error", 00:05:18.322 "bdev_error_delete", 00:05:18.322 "bdev_error_create", 00:05:18.322 "bdev_split_delete", 00:05:18.322 "bdev_split_create", 00:05:18.322 "bdev_delay_delete", 00:05:18.322 "bdev_delay_create", 00:05:18.322 "bdev_delay_update_latency", 00:05:18.322 "bdev_zone_block_delete", 00:05:18.322 "bdev_zone_block_create", 00:05:18.322 "blobfs_create", 00:05:18.322 "blobfs_detect", 00:05:18.322 "blobfs_set_cache_size", 00:05:18.322 "bdev_aio_delete", 00:05:18.322 "bdev_aio_rescan", 00:05:18.322 "bdev_aio_create", 00:05:18.322 "bdev_ftl_set_property", 00:05:18.322 "bdev_ftl_get_properties", 00:05:18.322 "bdev_ftl_get_stats", 00:05:18.322 "bdev_ftl_unmap", 00:05:18.322 "bdev_ftl_unload", 00:05:18.322 "bdev_ftl_delete", 00:05:18.322 "bdev_ftl_load", 00:05:18.322 "bdev_ftl_create", 00:05:18.322 "bdev_virtio_attach_controller", 00:05:18.322 "bdev_virtio_scsi_get_devices", 00:05:18.322 "bdev_virtio_detach_controller", 00:05:18.322 "bdev_virtio_blk_set_hotplug", 00:05:18.322 "bdev_iscsi_delete", 00:05:18.322 "bdev_iscsi_create", 00:05:18.322 "bdev_iscsi_set_options", 00:05:18.322 "accel_error_inject_error", 00:05:18.322 "ioat_scan_accel_module", 00:05:18.322 "dsa_scan_accel_module", 00:05:18.322 "iaa_scan_accel_module", 00:05:18.322 "keyring_file_remove_key", 00:05:18.322 "keyring_file_add_key", 00:05:18.322 
"keyring_linux_set_options", 00:05:18.322 "fsdev_aio_delete", 00:05:18.322 "fsdev_aio_create", 00:05:18.322 "iscsi_get_histogram", 00:05:18.322 "iscsi_enable_histogram", 00:05:18.322 "iscsi_set_options", 00:05:18.322 "iscsi_get_auth_groups", 00:05:18.322 "iscsi_auth_group_remove_secret", 00:05:18.322 "iscsi_auth_group_add_secret", 00:05:18.322 "iscsi_delete_auth_group", 00:05:18.322 "iscsi_create_auth_group", 00:05:18.322 "iscsi_set_discovery_auth", 00:05:18.322 "iscsi_get_options", 00:05:18.322 "iscsi_target_node_request_logout", 00:05:18.322 "iscsi_target_node_set_redirect", 00:05:18.322 "iscsi_target_node_set_auth", 00:05:18.322 "iscsi_target_node_add_lun", 00:05:18.322 "iscsi_get_stats", 00:05:18.322 "iscsi_get_connections", 00:05:18.322 "iscsi_portal_group_set_auth", 00:05:18.322 "iscsi_start_portal_group", 00:05:18.322 "iscsi_delete_portal_group", 00:05:18.322 "iscsi_create_portal_group", 00:05:18.322 "iscsi_get_portal_groups", 00:05:18.322 "iscsi_delete_target_node", 00:05:18.322 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.322 "iscsi_target_node_add_pg_ig_maps", 00:05:18.322 "iscsi_create_target_node", 00:05:18.322 "iscsi_get_target_nodes", 00:05:18.322 "iscsi_delete_initiator_group", 00:05:18.322 "iscsi_initiator_group_remove_initiators", 00:05:18.322 "iscsi_initiator_group_add_initiators", 00:05:18.322 "iscsi_create_initiator_group", 00:05:18.322 "iscsi_get_initiator_groups", 00:05:18.322 "nvmf_set_crdt", 00:05:18.322 "nvmf_set_config", 00:05:18.322 "nvmf_set_max_subsystems", 00:05:18.322 "nvmf_stop_mdns_prr", 00:05:18.322 "nvmf_publish_mdns_prr", 00:05:18.322 "nvmf_subsystem_get_listeners", 00:05:18.322 "nvmf_subsystem_get_qpairs", 00:05:18.322 "nvmf_subsystem_get_controllers", 00:05:18.322 "nvmf_get_stats", 00:05:18.322 "nvmf_get_transports", 00:05:18.322 "nvmf_create_transport", 00:05:18.322 "nvmf_get_targets", 00:05:18.322 "nvmf_delete_target", 00:05:18.322 "nvmf_create_target", 00:05:18.322 "nvmf_subsystem_allow_any_host", 00:05:18.322 
"nvmf_subsystem_set_keys", 00:05:18.322 "nvmf_subsystem_remove_host", 00:05:18.322 "nvmf_subsystem_add_host", 00:05:18.322 "nvmf_ns_remove_host", 00:05:18.322 "nvmf_ns_add_host", 00:05:18.322 "nvmf_subsystem_remove_ns", 00:05:18.322 "nvmf_subsystem_set_ns_ana_group", 00:05:18.322 "nvmf_subsystem_add_ns", 00:05:18.322 "nvmf_subsystem_listener_set_ana_state", 00:05:18.322 "nvmf_discovery_get_referrals", 00:05:18.322 "nvmf_discovery_remove_referral", 00:05:18.322 "nvmf_discovery_add_referral", 00:05:18.322 "nvmf_subsystem_remove_listener", 00:05:18.322 "nvmf_subsystem_add_listener", 00:05:18.322 "nvmf_delete_subsystem", 00:05:18.322 "nvmf_create_subsystem", 00:05:18.322 "nvmf_get_subsystems", 00:05:18.322 "env_dpdk_get_mem_stats", 00:05:18.322 "nbd_get_disks", 00:05:18.322 "nbd_stop_disk", 00:05:18.322 "nbd_start_disk", 00:05:18.322 "ublk_recover_disk", 00:05:18.322 "ublk_get_disks", 00:05:18.322 "ublk_stop_disk", 00:05:18.322 "ublk_start_disk", 00:05:18.322 "ublk_destroy_target", 00:05:18.322 "ublk_create_target", 00:05:18.322 "virtio_blk_create_transport", 00:05:18.322 "virtio_blk_get_transports", 00:05:18.322 "vhost_controller_set_coalescing", 00:05:18.322 "vhost_get_controllers", 00:05:18.322 "vhost_delete_controller", 00:05:18.322 "vhost_create_blk_controller", 00:05:18.322 "vhost_scsi_controller_remove_target", 00:05:18.322 "vhost_scsi_controller_add_target", 00:05:18.322 "vhost_start_scsi_controller", 00:05:18.322 "vhost_create_scsi_controller", 00:05:18.322 "thread_set_cpumask", 00:05:18.322 "scheduler_set_options", 00:05:18.322 "framework_get_governor", 00:05:18.322 "framework_get_scheduler", 00:05:18.322 "framework_set_scheduler", 00:05:18.322 "framework_get_reactors", 00:05:18.322 "thread_get_io_channels", 00:05:18.322 "thread_get_pollers", 00:05:18.322 "thread_get_stats", 00:05:18.322 "framework_monitor_context_switch", 00:05:18.322 "spdk_kill_instance", 00:05:18.322 "log_enable_timestamps", 00:05:18.322 "log_get_flags", 00:05:18.322 "log_clear_flag", 
00:05:18.322 "log_set_flag", 00:05:18.322 "log_get_level", 00:05:18.322 "log_set_level", 00:05:18.322 "log_get_print_level", 00:05:18.322 "log_set_print_level", 00:05:18.322 "framework_enable_cpumask_locks", 00:05:18.322 "framework_disable_cpumask_locks", 00:05:18.322 "framework_wait_init", 00:05:18.322 "framework_start_init", 00:05:18.322 "scsi_get_devices", 00:05:18.322 "bdev_get_histogram", 00:05:18.322 "bdev_enable_histogram", 00:05:18.322 "bdev_set_qos_limit", 00:05:18.322 "bdev_set_qd_sampling_period", 00:05:18.322 "bdev_get_bdevs", 00:05:18.322 "bdev_reset_iostat", 00:05:18.322 "bdev_get_iostat", 00:05:18.322 "bdev_examine", 00:05:18.322 "bdev_wait_for_examine", 00:05:18.322 "bdev_set_options", 00:05:18.322 "accel_get_stats", 00:05:18.322 "accel_set_options", 00:05:18.322 "accel_set_driver", 00:05:18.322 "accel_crypto_key_destroy", 00:05:18.322 "accel_crypto_keys_get", 00:05:18.322 "accel_crypto_key_create", 00:05:18.322 "accel_assign_opc", 00:05:18.322 "accel_get_module_info", 00:05:18.322 "accel_get_opc_assignments", 00:05:18.322 "vmd_rescan", 00:05:18.322 "vmd_remove_device", 00:05:18.322 "vmd_enable", 00:05:18.322 "sock_get_default_impl", 00:05:18.322 "sock_set_default_impl", 00:05:18.322 "sock_impl_set_options", 00:05:18.322 "sock_impl_get_options", 00:05:18.322 "iobuf_get_stats", 00:05:18.322 "iobuf_set_options", 00:05:18.322 "keyring_get_keys", 00:05:18.322 "framework_get_pci_devices", 00:05:18.322 "framework_get_config", 00:05:18.322 "framework_get_subsystems", 00:05:18.322 "fsdev_set_opts", 00:05:18.322 "fsdev_get_opts", 00:05:18.322 "trace_get_info", 00:05:18.322 "trace_get_tpoint_group_mask", 00:05:18.322 "trace_disable_tpoint_group", 00:05:18.322 "trace_enable_tpoint_group", 00:05:18.322 "trace_clear_tpoint_mask", 00:05:18.322 "trace_set_tpoint_mask", 00:05:18.322 "notify_get_notifications", 00:05:18.322 "notify_get_types", 00:05:18.322 "spdk_get_version", 00:05:18.322 "rpc_get_methods" 00:05:18.322 ] 00:05:18.322 16:06:32 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.322 16:06:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.322 16:06:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57792 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57792 ']' 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57792 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57792 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57792' 00:05:18.322 killing process with pid 57792 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57792 00:05:18.322 16:06:32 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57792 00:05:20.859 ************************************ 00:05:20.859 END TEST spdkcli_tcp 00:05:20.859 ************************************ 00:05:20.859 00:05:20.859 real 0m4.624s 00:05:20.859 user 0m7.810s 00:05:20.859 sys 0m0.833s 00:05:20.859 16:06:35 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.859 16:06:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.119 16:06:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.119 16:06:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.119 16:06:35 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.119 16:06:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.119 ************************************ 00:05:21.119 START TEST dpdk_mem_utility 00:05:21.119 ************************************ 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.119 * Looking for test storage... 00:05:21.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:21.119 
16:06:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.119 16:06:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.119 --rc genhtml_branch_coverage=1 00:05:21.119 --rc genhtml_function_coverage=1 00:05:21.119 --rc genhtml_legend=1 00:05:21.119 --rc geninfo_all_blocks=1 00:05:21.119 --rc geninfo_unexecuted_blocks=1 00:05:21.119 00:05:21.119 ' 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.119 --rc 
genhtml_branch_coverage=1 00:05:21.119 --rc genhtml_function_coverage=1 00:05:21.119 --rc genhtml_legend=1 00:05:21.119 --rc geninfo_all_blocks=1 00:05:21.119 --rc geninfo_unexecuted_blocks=1 00:05:21.119 00:05:21.119 ' 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.119 --rc genhtml_branch_coverage=1 00:05:21.119 --rc genhtml_function_coverage=1 00:05:21.119 --rc genhtml_legend=1 00:05:21.119 --rc geninfo_all_blocks=1 00:05:21.119 --rc geninfo_unexecuted_blocks=1 00:05:21.119 00:05:21.119 ' 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.119 --rc genhtml_branch_coverage=1 00:05:21.119 --rc genhtml_function_coverage=1 00:05:21.119 --rc genhtml_legend=1 00:05:21.119 --rc geninfo_all_blocks=1 00:05:21.119 --rc geninfo_unexecuted_blocks=1 00:05:21.119 00:05:21.119 ' 00:05:21.119 16:06:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:21.119 16:06:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57919 00:05:21.119 16:06:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.119 16:06:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57919 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57919 ']' 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.119 16:06:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.387 [2024-09-28 16:06:35.888928] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:21.387 [2024-09-28 16:06:35.889056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57919 ] 00:05:21.387 [2024-09-28 16:06:36.054563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.648 [2024-09-28 16:06:36.293644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.585 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.585 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:22.585 16:06:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.585 16:06:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.585 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.585 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.846 { 00:05:22.846 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.846 } 00:05:22.846 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.846 16:06:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:22.846 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:22.846 1 heaps 
totaling size 866.000000 MiB 00:05:22.846 size: 866.000000 MiB heap id: 0 00:05:22.846 end heaps---------- 00:05:22.846 9 mempools totaling size 642.649841 MiB 00:05:22.846 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:22.846 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:22.846 size: 92.545471 MiB name: bdev_io_57919 00:05:22.846 size: 51.011292 MiB name: evtpool_57919 00:05:22.846 size: 50.003479 MiB name: msgpool_57919 00:05:22.846 size: 36.509338 MiB name: fsdev_io_57919 00:05:22.846 size: 21.763794 MiB name: PDU_Pool 00:05:22.846 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:22.846 size: 0.026123 MiB name: Session_Pool 00:05:22.846 end mempools------- 00:05:22.846 6 memzones totaling size 4.142822 MiB 00:05:22.846 size: 1.000366 MiB name: RG_ring_0_57919 00:05:22.846 size: 1.000366 MiB name: RG_ring_1_57919 00:05:22.846 size: 1.000366 MiB name: RG_ring_4_57919 00:05:22.846 size: 1.000366 MiB name: RG_ring_5_57919 00:05:22.846 size: 0.125366 MiB name: RG_ring_2_57919 00:05:22.846 size: 0.015991 MiB name: RG_ring_3_57919 00:05:22.846 end memzones------- 00:05:22.846 16:06:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:22.846 heap id: 0 total size: 866.000000 MiB number of busy elements: 314 number of free elements: 19 00:05:22.846 list of free elements. 
size: 19.913818 MiB 00:05:22.846 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:22.846 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:22.846 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:22.846 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:22.846 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:22.846 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:22.846 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:22.846 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:22.846 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:22.846 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:22.846 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:22.846 element at address: 0x200000200000 with size: 0.831909 MiB 00:05:22.846 element at address: 0x20001de00000 with size: 0.562195 MiB 00:05:22.846 element at address: 0x200003e00000 with size: 0.490417 MiB 00:05:22.846 element at address: 0x20001c000000 with size: 0.488464 MiB 00:05:22.846 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:22.846 element at address: 0x200015e00000 with size: 0.443481 MiB 00:05:22.846 element at address: 0x20002b200000 with size: 0.390442 MiB 00:05:22.846 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:22.846 list of standard malloc elements. 
size: 199.287476 MiB
00:05:22.846 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:05:22.846 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:05:22.846 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:05:22.846 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:05:22.846 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:05:22.846 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:22.846 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:05:22.846 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:22.846 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:05:22.846 element at address: 0x20001c7efdc0 with size: 0.000366 MiB
00:05:22.846 element at address: 0x200015dff040 with size: 0.000305 MiB
[~300 further elements with size: 0.000244 MiB each (0x2000002d4f80 through 0x20002b26fe80) elided]
00:05:22.848 list of memzone associated elements. size: 646.798706 MiB
00:05:22.848 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:05:22.848 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:22.848 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:05:22.848 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:22.848 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:05:22.848 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57919_0
00:05:22.848 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:05:22.848 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57919_0
00:05:22.848 element at address: 0x200003fff340 with size: 48.003113 MiB
00:05:22.848 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57919_0
00:05:22.848 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:05:22.848 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57919_0
00:05:22.848 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:05:22.848 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:22.848 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:05:22.848 associated memzone info: size: 18.004944 MiB
name: MP_SCSI_TASK_Pool_0
00:05:22.848 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:05:22.848 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57919
00:05:22.848 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:05:22.848 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57919
00:05:22.848 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:22.848 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57919
00:05:22.848 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:05:22.848 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:22.848 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:05:22.848 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:22.848 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:05:22.848 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:22.848 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:05:22.848 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:22.848 element at address: 0x200003eff100 with size: 1.000549 MiB
00:05:22.848 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57919
00:05:22.848 element at address: 0x200003affb80 with size: 1.000549 MiB
00:05:22.848 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57919
00:05:22.848 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:05:22.848 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57919
00:05:22.848 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:05:22.848 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57919
00:05:22.848 element at address: 0x200003a7f4c0 with size: 0.500549 MiB
00:05:22.848 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57919
00:05:22.848 element at address: 0x200003e7edc0 with size: 0.500549 MiB
00:05:22.848 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57919
00:05:22.848 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:05:22.848 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:22.848 element at address: 0x200015e72280 with size: 0.500549 MiB
00:05:22.848 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:22.848 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:05:22.848 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:22.848 element at address: 0x200003a5e780 with size: 0.125549 MiB
00:05:22.848 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57919
00:05:22.848 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:05:22.848 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:22.848 element at address: 0x20002b264140 with size: 0.023804 MiB
00:05:22.848 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:22.848 element at address: 0x200003a5a540 with size: 0.016174 MiB
00:05:22.848 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57919
00:05:22.848 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:05:22.848 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:22.848 element at address: 0x2000002d6080 with size: 0.000366 MiB
00:05:22.848 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57919
00:05:22.848 element at address: 0x200003aff800 with size: 0.000366 MiB
00:05:22.848 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57919
00:05:22.848 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:05:22.848 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57919
00:05:22.848 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:05:22.848 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:22.848 16:06:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:22.848 16:06:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57919
00:05:22.848 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57919 ']'
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57919
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57919
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 57919
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57919'
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57919
00:05:22.849 16:06:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57919
00:05:25.457
00:05:25.457 real 0m4.504s
00:05:25.457 user 0m4.191s
00:05:25.457 sys 0m0.773s
00:05:25.457 16:06:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:25.457 16:06:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:25.457 ************************************
00:05:25.457 END TEST dpdk_mem_utility
00:05:25.457 ************************************
00:05:25.457 16:06:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:25.457 16:06:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:25.457 16:06:40 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:25.457 16:06:40 -- common/autotest_common.sh@10 -- # set +x
00:05:25.457 ************************************
00:05:25.457 START TEST event
00:05:25.457 ************************************
00:05:25.457 16:06:40 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:25.717 * Looking for test storage...
00:05:25.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1681 -- # lcov --version
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:25.717 16:06:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:25.717 16:06:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:25.717 16:06:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:25.717 16:06:40 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:25.717 16:06:40 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:25.717 16:06:40 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:25.717 16:06:40 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:25.717 16:06:40 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:25.717 16:06:40 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:25.717 16:06:40 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:25.717 16:06:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:25.717 16:06:40 event -- scripts/common.sh@344 -- # case "$op" in
00:05:25.717 16:06:40 event -- scripts/common.sh@345 -- # : 1
00:05:25.717 16:06:40 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:25.717 16:06:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:25.717 16:06:40 event -- scripts/common.sh@365 -- # decimal 1
00:05:25.717 16:06:40 event -- scripts/common.sh@353 -- # local d=1
00:05:25.717 16:06:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:25.717 16:06:40 event -- scripts/common.sh@355 -- # echo 1
00:05:25.717 16:06:40 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:25.717 16:06:40 event -- scripts/common.sh@366 -- # decimal 2
00:05:25.717 16:06:40 event -- scripts/common.sh@353 -- # local d=2
00:05:25.717 16:06:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:25.717 16:06:40 event -- scripts/common.sh@355 -- # echo 2
00:05:25.717 16:06:40 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:25.717 16:06:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:25.717 16:06:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:25.717 16:06:40 event -- scripts/common.sh@368 -- # return 0
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:25.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.717 --rc genhtml_branch_coverage=1
00:05:25.717 --rc genhtml_function_coverage=1
00:05:25.717 --rc genhtml_legend=1
00:05:25.717 --rc geninfo_all_blocks=1
00:05:25.717 --rc geninfo_unexecuted_blocks=1
00:05:25.717
00:05:25.717 '
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:25.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.717 --rc genhtml_branch_coverage=1
00:05:25.717 --rc genhtml_function_coverage=1
00:05:25.717 --rc genhtml_legend=1
00:05:25.717 --rc geninfo_all_blocks=1
00:05:25.717 --rc geninfo_unexecuted_blocks=1
00:05:25.717
00:05:25.717 '
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:25.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.717 --rc genhtml_branch_coverage=1
00:05:25.717 --rc genhtml_function_coverage=1
00:05:25.717 --rc genhtml_legend=1
00:05:25.717 --rc geninfo_all_blocks=1
00:05:25.717 --rc geninfo_unexecuted_blocks=1
00:05:25.717
00:05:25.717 '
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:25.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.717 --rc genhtml_branch_coverage=1
00:05:25.717 --rc genhtml_function_coverage=1
00:05:25.717 --rc genhtml_legend=1
00:05:25.717 --rc geninfo_all_blocks=1
00:05:25.717 --rc geninfo_unexecuted_blocks=1
00:05:25.717
00:05:25.717 '
00:05:25.717 16:06:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:25.717 16:06:40 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:25.717 16:06:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:25.717 16:06:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:25.717 16:06:40 event -- common/autotest_common.sh@10 -- # set +x
00:05:25.717 ************************************
00:05:25.717 START TEST event_perf
00:05:25.717 ************************************
00:05:25.717 16:06:40 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:25.977 Running I/O for 1 seconds...[2024-09-28 16:06:40.421535] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:25.977 [2024-09-28 16:06:40.421646] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58033 ]
00:05:25.977 [2024-09-28 16:06:40.586005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:26.237 Running I/O for 1 seconds...[2024-09-28 16:06:40.832927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:26.237 [2024-09-28 16:06:40.833215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.237 [2024-09-28 16:06:40.833122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:05:26.237 [2024-09-28 16:06:40.833314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:05:27.615
00:05:27.615 lcore 0: 73015
00:05:27.615 lcore 1: 73018
00:05:27.615 lcore 2: 73021
00:05:27.615 lcore 3: 73025
00:05:27.615 done.
00:05:27.615 00:05:27.615 real 0m1.871s 00:05:27.615 user 0m4.588s 00:05:27.615 sys 0m0.159s 00:05:27.615 ************************************ 00:05:27.615 END TEST event_perf 00:05:27.615 ************************************ 00:05:27.615 16:06:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.615 16:06:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 16:06:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.875 16:06:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:27.875 16:06:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.875 16:06:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 ************************************ 00:05:27.875 START TEST event_reactor 00:05:27.875 ************************************ 00:05:27.875 16:06:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.875 [2024-09-28 16:06:42.364038] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:27.875 [2024-09-28 16:06:42.364286] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58078 ] 00:05:27.875 [2024-09-28 16:06:42.526133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.134 [2024-09-28 16:06:42.764158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.514 test_start 00:05:29.514 oneshot 00:05:29.514 tick 100 00:05:29.514 tick 100 00:05:29.514 tick 250 00:05:29.514 tick 100 00:05:29.514 tick 100 00:05:29.514 tick 100 00:05:29.514 tick 250 00:05:29.514 tick 500 00:05:29.514 tick 100 00:05:29.514 tick 100 00:05:29.514 tick 250 00:05:29.514 tick 100 00:05:29.514 tick 100 00:05:29.514 test_end 00:05:29.514 00:05:29.514 real 0m1.839s 00:05:29.514 user 0m1.607s 00:05:29.514 sys 0m0.123s 00:05:29.514 ************************************ 00:05:29.514 END TEST event_reactor 00:05:29.514 ************************************ 00:05:29.514 16:06:44 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.514 16:06:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 16:06:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.772 16:06:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:29.772 16:06:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.772 16:06:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 ************************************ 00:05:29.772 START TEST event_reactor_perf 00:05:29.772 ************************************ 00:05:29.772 16:06:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.772 [2024-09-28 
16:06:44.270331] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:29.772 [2024-09-28 16:06:44.270421] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58120 ] 00:05:29.772 [2024-09-28 16:06:44.431841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.032 [2024-09-28 16:06:44.671160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.410 test_start 00:05:31.410 test_end 00:05:31.410 Performance: 418818 events per second 00:05:31.410 00:05:31.410 real 0m1.839s 00:05:31.410 user 0m1.606s 00:05:31.410 sys 0m0.124s 00:05:31.410 ************************************ 00:05:31.410 END TEST event_reactor_perf 00:05:31.410 ************************************ 00:05:31.410 16:06:46 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.411 16:06:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.670 16:06:46 event -- event/event.sh@49 -- # uname -s 00:05:31.670 16:06:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:31.670 16:06:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.670 16:06:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.670 16:06:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.670 16:06:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.670 ************************************ 00:05:31.670 START TEST event_scheduler 00:05:31.670 ************************************ 00:05:31.670 16:06:46 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.670 * Looking for test storage... 
00:05:31.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:31.670 16:06:46 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.670 16:06:46 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.670 16:06:46 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.670 16:06:46 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.670 16:06:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.930 16:06:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.930 --rc genhtml_branch_coverage=1 00:05:31.930 --rc genhtml_function_coverage=1 00:05:31.930 --rc genhtml_legend=1 00:05:31.930 --rc geninfo_all_blocks=1 00:05:31.930 --rc geninfo_unexecuted_blocks=1 00:05:31.930 00:05:31.930 ' 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.930 --rc genhtml_branch_coverage=1 00:05:31.930 --rc genhtml_function_coverage=1 00:05:31.930 --rc 
genhtml_legend=1 00:05:31.930 --rc geninfo_all_blocks=1 00:05:31.930 --rc geninfo_unexecuted_blocks=1 00:05:31.930 00:05:31.930 ' 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.930 --rc genhtml_branch_coverage=1 00:05:31.930 --rc genhtml_function_coverage=1 00:05:31.930 --rc genhtml_legend=1 00:05:31.930 --rc geninfo_all_blocks=1 00:05:31.930 --rc geninfo_unexecuted_blocks=1 00:05:31.930 00:05:31.930 ' 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.930 --rc genhtml_branch_coverage=1 00:05:31.930 --rc genhtml_function_coverage=1 00:05:31.930 --rc genhtml_legend=1 00:05:31.930 --rc geninfo_all_blocks=1 00:05:31.930 --rc geninfo_unexecuted_blocks=1 00:05:31.930 00:05:31.930 ' 00:05:31.930 16:06:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.930 16:06:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58196 00:05:31.930 16:06:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:31.930 16:06:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.930 16:06:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58196 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58196 ']' 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:31.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.930 16:06:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.930 [2024-09-28 16:06:46.453675] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:31.930 [2024-09-28 16:06:46.453856] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58196 ] 00:05:32.190 [2024-09-28 16:06:46.616720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.190 [2024-09-28 16:06:46.862341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.190 [2024-09-28 16:06:46.862626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.190 [2024-09-28 16:06:46.862591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.190 [2024-09-28 16:06:46.862442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.758 16:06:47 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.758 16:06:47 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:32.758 16:06:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.758 16:06:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.758 16:06:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.758 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.758 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.758 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.758 POWER: Cannot set governor of lcore 0 to performance 00:05:32.758 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.758 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.758 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.758 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.758 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:32.758 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:32.758 POWER: Unable to set Power Management Environment for lcore 0 00:05:32.758 [2024-09-28 16:06:47.292633] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:32.758 [2024-09-28 16:06:47.292718] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:32.758 [2024-09-28 16:06:47.292811] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.758 [2024-09-28 16:06:47.292914] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.758 [2024-09-28 16:06:47.292993] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.758 [2024-09-28 16:06:47.293079] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.758 16:06:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.758 16:06:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.758 16:06:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.758 16:06:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.018 [2024-09-28 16:06:47.672760] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:33.018 16:06:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.018 16:06:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.018 16:06:47 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.018 16:06:47 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.018 16:06:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.018 ************************************ 00:05:33.018 START TEST scheduler_create_thread 00:05:33.018 ************************************ 00:05:33.018 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:33.018 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.018 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.018 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.018 2 00:05:33.018 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.018 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 3 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 4 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 5 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 6 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.278 7 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 8 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 9 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 10 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.278 16:06:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.216 16:06:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.216 16:06:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.216 16:06:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.216 16:06:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.596 16:06:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.596 16:06:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.596 16:06:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.596 16:06:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.596 16:06:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.534 16:06:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.534 ************************************ 00:05:36.534 END TEST scheduler_create_thread 00:05:36.534 ************************************ 00:05:36.534 00:05:36.534 real 0m3.375s 00:05:36.534 user 0m0.032s 00:05:36.534 sys 0m0.006s 00:05:36.534 16:06:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.534 16:06:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.534 16:06:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.534 16:06:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58196 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58196 ']' 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58196 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58196 00:05:36.534 killing process with pid 58196 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58196' 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58196 00:05:36.534 16:06:51 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58196 00:05:36.794 [2024-09-28 16:06:51.440964] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:38.199 00:05:38.199 real 0m6.712s 00:05:38.199 user 0m13.411s 00:05:38.199 sys 0m0.614s 00:05:38.199 16:06:52 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.199 16:06:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.199 ************************************ 00:05:38.199 END TEST event_scheduler 00:05:38.199 ************************************ 00:05:38.458 16:06:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.458 16:06:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.458 16:06:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.458 16:06:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.458 16:06:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.458 ************************************ 00:05:38.458 START TEST app_repeat 00:05:38.458 ************************************ 00:05:38.458 16:06:52 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58313 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.458 
16:06:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58313' 00:05:38.458 Process app_repeat pid: 58313 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.458 spdk_app_start Round 0 00:05:38.458 16:06:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58313 /var/tmp/spdk-nbd.sock 00:05:38.458 16:06:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58313 ']' 00:05:38.458 16:06:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.458 16:06:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.458 16:06:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.458 16:06:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.458 16:06:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.458 [2024-09-28 16:06:52.992826] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:38.458 [2024-09-28 16:06:52.993018] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58313 ] 00:05:38.718 [2024-09-28 16:06:53.158990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.718 [2024-09-28 16:06:53.388369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.718 [2024-09-28 16:06:53.388407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.286 16:06:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.286 16:06:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:39.286 16:06:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.546 Malloc0 00:05:39.546 16:06:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.805 Malloc1 00:05:39.805 16:06:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.805 16:06:54 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.805 16:06:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.064 /dev/nbd0 00:05:40.064 16:06:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.064 16:06:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.065 1+0 records in 00:05:40.065 1+0 
records out 00:05:40.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544858 s, 7.5 MB/s 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.065 16:06:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.065 16:06:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.065 16:06:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.065 16:06:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.325 /dev/nbd1 00:05:40.325 16:06:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.325 16:06:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.325 1+0 records in 00:05:40.325 1+0 records out 00:05:40.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366203 s, 11.2 MB/s 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.325 16:06:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.325 16:06:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.325 16:06:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.325 16:06:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.325 16:06:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.325 16:06:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.585 { 00:05:40.585 "nbd_device": "/dev/nbd0", 00:05:40.585 "bdev_name": "Malloc0" 00:05:40.585 }, 00:05:40.585 { 00:05:40.585 "nbd_device": "/dev/nbd1", 00:05:40.585 "bdev_name": "Malloc1" 00:05:40.585 } 00:05:40.585 ]' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.585 { 00:05:40.585 "nbd_device": "/dev/nbd0", 00:05:40.585 "bdev_name": "Malloc0" 00:05:40.585 }, 00:05:40.585 { 00:05:40.585 "nbd_device": "/dev/nbd1", 00:05:40.585 "bdev_name": "Malloc1" 00:05:40.585 } 00:05:40.585 ]' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.585 /dev/nbd1' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.585 /dev/nbd1' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.585 256+0 records in 00:05:40.585 256+0 records out 00:05:40.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120312 s, 87.2 MB/s 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.585 256+0 records in 00:05:40.585 256+0 records out 00:05:40.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250101 s, 41.9 MB/s 00:05:40.585 16:06:55 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.585 256+0 records in 00:05:40.585 256+0 records out 00:05:40.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288016 s, 36.4 MB/s 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.585 16:06:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.845 16:06:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.105 16:06:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.393 16:06:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.393 16:06:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.661 16:06:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.038 [2024-09-28 16:06:57.598637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.296 [2024-09-28 16:06:57.799041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.296 [2024-09-28 16:06:57.799045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.555 
[2024-09-28 16:06:57.986045] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.555 [2024-09-28 16:06:57.986162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.932 16:06:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.932 spdk_app_start Round 1 00:05:44.932 16:06:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.932 16:06:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58313 /var/tmp/spdk-nbd.sock 00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58313 ']' 00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.932 16:06:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:44.932 16:06:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.191 Malloc0 00:05:45.191 16:06:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.451 Malloc1 00:05:45.451 16:07:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.451 16:07:00 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.451 16:07:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.710 /dev/nbd0 00:05:45.710 16:07:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.710 16:07:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.710 1+0 records in 00:05:45.710 1+0 records out 00:05:45.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350897 s, 11.7 MB/s 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.710 
16:07:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.710 16:07:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.710 16:07:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.710 16:07:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.710 16:07:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.970 /dev/nbd1 00:05:45.970 16:07:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.970 16:07:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.970 1+0 records in 00:05:45.970 1+0 records out 00:05:45.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255947 s, 16.0 MB/s 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.970 16:07:00 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.970 16:07:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.970 16:07:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.970 16:07:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.970 16:07:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.970 16:07:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.970 16:07:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.230 { 00:05:46.230 "nbd_device": "/dev/nbd0", 00:05:46.230 "bdev_name": "Malloc0" 00:05:46.230 }, 00:05:46.230 { 00:05:46.230 "nbd_device": "/dev/nbd1", 00:05:46.230 "bdev_name": "Malloc1" 00:05:46.230 } 00:05:46.230 ]' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.230 { 00:05:46.230 "nbd_device": "/dev/nbd0", 00:05:46.230 "bdev_name": "Malloc0" 00:05:46.230 }, 00:05:46.230 { 00:05:46.230 "nbd_device": "/dev/nbd1", 00:05:46.230 "bdev_name": "Malloc1" 00:05:46.230 } 00:05:46.230 ]' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.230 /dev/nbd1' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.230 /dev/nbd1' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.230 
16:07:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.230 256+0 records in 00:05:46.230 256+0 records out 00:05:46.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012656 s, 82.9 MB/s 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.230 256+0 records in 00:05:46.230 256+0 records out 00:05:46.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251002 s, 41.8 MB/s 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.230 256+0 records in 00:05:46.230 256+0 records out 00:05:46.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299641 s, 35.0 MB/s 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.230 16:07:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.489 16:07:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.489 16:07:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.489 16:07:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.489 16:07:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.489 16:07:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.489 16:07:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.489 16:07:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.489 16:07:01 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.489 16:07:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.748 16:07:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.007 16:07:01 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.007 16:07:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.007 16:07:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.574 16:07:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.948 [2024-09-28 16:07:03.343289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.948 [2024-09-28 16:07:03.568056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.948 [2024-09-28 16:07:03.568083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.207 [2024-09-28 16:07:03.782577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.207 [2024-09-28 16:07:03.782645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:50.584 spdk_app_start Round 2 00:05:50.584 16:07:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.584 16:07:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.584 16:07:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58313 /var/tmp/spdk-nbd.sock 00:05:50.584 16:07:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58313 ']' 00:05:50.585 16:07:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.585 16:07:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.585 16:07:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.585 16:07:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.585 16:07:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.585 16:07:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.585 16:07:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:50.585 16:07:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.843 Malloc0 00:05:50.843 16:07:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.102 Malloc1 00:05:51.102 16:07:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.102 
16:07:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.102 16:07:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.359 /dev/nbd0 00:05:51.359 16:07:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.359 16:07:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:51.359 16:07:05 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.359 1+0 records in 00:05:51.359 1+0 records out 00:05:51.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284597 s, 14.4 MB/s 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.359 16:07:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.359 16:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.359 16:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.359 16:07:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.617 /dev/nbd1 00:05:51.617 16:07:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.617 16:07:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.617 16:07:06 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.617 1+0 records in 00:05:51.617 1+0 records out 00:05:51.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346365 s, 11.8 MB/s 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.617 16:07:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.617 16:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.617 16:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.617 16:07:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.617 16:07:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.617 16:07:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.876 { 00:05:51.876 "nbd_device": "/dev/nbd0", 00:05:51.876 "bdev_name": "Malloc0" 00:05:51.876 }, 00:05:51.876 { 00:05:51.876 "nbd_device": "/dev/nbd1", 00:05:51.876 "bdev_name": 
"Malloc1" 00:05:51.876 } 00:05:51.876 ]' 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.876 { 00:05:51.876 "nbd_device": "/dev/nbd0", 00:05:51.876 "bdev_name": "Malloc0" 00:05:51.876 }, 00:05:51.876 { 00:05:51.876 "nbd_device": "/dev/nbd1", 00:05:51.876 "bdev_name": "Malloc1" 00:05:51.876 } 00:05:51.876 ]' 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.876 /dev/nbd1' 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.876 /dev/nbd1' 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.876 256+0 records in 00:05:51.876 256+0 records out 00:05:51.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126009 s, 83.2 MB/s 
00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.876 256+0 records in 00:05:51.876 256+0 records out 00:05:51.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215649 s, 48.6 MB/s 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.876 16:07:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.135 256+0 records in 00:05:52.135 256+0 records out 00:05:52.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256521 s, 40.9 MB/s 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.135 16:07:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.394 16:07:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.394 16:07:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.652 16:07:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.652 16:07:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.220 16:07:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.600 [2024-09-28 16:07:09.060599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.600 [2024-09-28 16:07:09.283822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.600 [2024-09-28 16:07:09.283825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.859 [2024-09-28 16:07:09.501194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.859 [2024-09-28 16:07:09.501291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.241 16:07:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58313 /var/tmp/spdk-nbd.sock 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58313 ']' 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:56.241 16:07:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58313 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58313 ']' 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58313 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.241 16:07:10 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58313 00:05:56.533 16:07:10 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.533 16:07:10 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.533 16:07:10 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58313' 00:05:56.533 killing process with pid 58313 00:05:56.533 16:07:10 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58313 00:05:56.533 16:07:10 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58313 00:05:57.481 spdk_app_start is called in Round 0. 00:05:57.481 Shutdown signal received, stop current app iteration 00:05:57.481 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:57.481 spdk_app_start is called in Round 1. 00:05:57.481 Shutdown signal received, stop current app iteration 00:05:57.481 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:57.481 spdk_app_start is called in Round 2. 
00:05:57.481 Shutdown signal received, stop current app iteration 00:05:57.481 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:57.481 spdk_app_start is called in Round 3. 00:05:57.481 Shutdown signal received, stop current app iteration 00:05:57.739 16:07:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:57.739 16:07:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:57.739 00:05:57.739 real 0m19.266s 00:05:57.739 user 0m39.873s 00:05:57.739 sys 0m2.814s 00:05:57.739 16:07:12 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.739 16:07:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.739 ************************************ 00:05:57.739 END TEST app_repeat 00:05:57.739 ************************************ 00:05:57.739 16:07:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:57.739 16:07:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:57.739 16:07:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.739 16:07:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.739 16:07:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.739 ************************************ 00:05:57.739 START TEST cpu_locks 00:05:57.739 ************************************ 00:05:57.739 16:07:12 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:57.739 * Looking for test storage... 
00:05:57.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.739 16:07:12 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.739 16:07:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.739 16:07:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.998 16:07:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.998 16:07:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:57.998 16:07:12 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.998 16:07:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.998 --rc genhtml_branch_coverage=1 00:05:57.998 --rc genhtml_function_coverage=1 00:05:57.998 --rc genhtml_legend=1 00:05:57.998 --rc geninfo_all_blocks=1 00:05:57.998 --rc geninfo_unexecuted_blocks=1 00:05:57.998 00:05:57.998 ' 00:05:57.998 16:07:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.998 --rc genhtml_branch_coverage=1 00:05:57.998 --rc genhtml_function_coverage=1 00:05:57.998 --rc genhtml_legend=1 00:05:57.999 --rc geninfo_all_blocks=1 00:05:57.999 --rc geninfo_unexecuted_blocks=1 
00:05:57.999 00:05:57.999 ' 00:05:57.999 16:07:12 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.999 --rc genhtml_branch_coverage=1 00:05:57.999 --rc genhtml_function_coverage=1 00:05:57.999 --rc genhtml_legend=1 00:05:57.999 --rc geninfo_all_blocks=1 00:05:57.999 --rc geninfo_unexecuted_blocks=1 00:05:57.999 00:05:57.999 ' 00:05:57.999 16:07:12 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.999 --rc genhtml_branch_coverage=1 00:05:57.999 --rc genhtml_function_coverage=1 00:05:57.999 --rc genhtml_legend=1 00:05:57.999 --rc geninfo_all_blocks=1 00:05:57.999 --rc geninfo_unexecuted_blocks=1 00:05:57.999 00:05:57.999 ' 00:05:57.999 16:07:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:57.999 16:07:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:57.999 16:07:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:57.999 16:07:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:57.999 16:07:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.999 16:07:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.999 16:07:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.999 ************************************ 00:05:57.999 START TEST default_locks 00:05:57.999 ************************************ 00:05:57.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.999 16:07:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58759 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58759 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58759 ']' 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.999 16:07:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.999 [2024-09-28 16:07:12.619912] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:57.999 [2024-09-28 16:07:12.620162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58759 ] 00:05:58.258 [2024-09-28 16:07:12.789820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.518 [2024-09-28 16:07:13.039886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.452 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.452 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:59.452 16:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58759 00:05:59.452 16:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58759 00:05:59.452 16:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58759 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58759 ']' 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58759 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58759 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.710 killing process with pid 58759 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58759' 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58759 00:05:59.710 16:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58759 00:06:02.241 16:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58759 00:06:02.242 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:02.242 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58759 00:06:02.242 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:02.242 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.242 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58759 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58759 ']' 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:02.501 ERROR: process (pid: 58759) is no longer running 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58759) - No such process 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.501 00:06:02.501 real 0m4.422s 00:06:02.501 user 0m4.094s 00:06:02.501 sys 0m0.826s 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.501 ************************************ 00:06:02.501 END TEST default_locks 00:06:02.501 ************************************ 00:06:02.501 16:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 16:07:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:02.501 16:07:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:02.501 16:07:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.501 16:07:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 ************************************ 00:06:02.501 START TEST default_locks_via_rpc 00:06:02.501 ************************************ 00:06:02.501 16:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:02.501 16:07:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58835 00:06:02.501 16:07:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58835 00:06:02.501 16:07:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.501 16:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58835 ']' 00:06:02.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.501 16:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.501 16:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.501 16:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.501 16:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.501 16:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 [2024-09-28 16:07:17.096411] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:02.501 [2024-09-28 16:07:17.096623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58835 ] 00:06:02.759 [2024-09-28 16:07:17.263508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.017 [2024-09-28 16:07:17.500921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 16:07:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58835 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58835 00:06:03.953 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58835 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58835 ']' 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58835 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58835 00:06:04.212 killing process with pid 58835 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58835' 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58835 00:06:04.212 16:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58835 00:06:07.502 ************************************ 00:06:07.502 END TEST default_locks_via_rpc 00:06:07.502 ************************************ 00:06:07.502 00:06:07.502 real 0m4.436s 00:06:07.502 user 0m4.174s 00:06:07.502 sys 0m0.770s 00:06:07.502 
16:07:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.502 16:07:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.502 16:07:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.502 16:07:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.502 16:07:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.502 16:07:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.502 ************************************ 00:06:07.502 START TEST non_locking_app_on_locked_coremask 00:06:07.502 ************************************ 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58915 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58915 /var/tmp/spdk.sock 00:06:07.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58915 ']' 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.502 16:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.502 [2024-09-28 16:07:21.612318] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:07.502 [2024-09-28 16:07:21.612452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58915 ] 00:06:07.502 [2024-09-28 16:07:21.781681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.502 [2024-09-28 16:07:22.025471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58931 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58931 /var/tmp/spdk2.sock 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58931 ']' 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.438 16:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.438 [2024-09-28 16:07:23.107927] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:08.438 [2024-09-28 16:07:23.108128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58931 ] 00:06:08.696 [2024-09-28 16:07:23.268337] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.696 [2024-09-28 16:07:23.268403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.262 [2024-09-28 16:07:23.764306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.164 16:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.164 16:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.164 16:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58915 00:06:11.164 16:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58915 00:06:11.164 16:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58915 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58915 ']' 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58915 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58915 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.423 killing process with pid 58915 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58915' 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58915 00:06:11.423 16:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58915 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58931 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58931 ']' 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58931 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58931 00:06:16.693 killing process with pid 58931 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58931' 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58931 00:06:16.693 16:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58931 00:06:19.975 ************************************ 00:06:19.975 END TEST non_locking_app_on_locked_coremask 00:06:19.975 ************************************ 00:06:19.975 00:06:19.975 real 0m12.488s 
00:06:19.975 user 0m12.223s 00:06:19.975 sys 0m1.646s 00:06:19.975 16:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.975 16:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.975 16:07:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:19.975 16:07:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.975 16:07:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.975 16:07:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.975 ************************************ 00:06:19.975 START TEST locking_app_on_unlocked_coremask 00:06:19.975 ************************************ 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59093 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59093 /var/tmp/spdk.sock 00:06:19.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59093 ']' 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.975 16:07:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.975 [2024-09-28 16:07:34.160477] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:19.975 [2024-09-28 16:07:34.160747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59093 ] 00:06:19.975 [2024-09-28 16:07:34.330097] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.975 [2024-09-28 16:07:34.330180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.975 [2024-09-28 16:07:34.576182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59109 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59109 /var/tmp/spdk2.sock 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59109 ']' 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.909 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.910 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.910 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.910 16:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.168 [2024-09-28 16:07:35.663128] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:21.168 [2024-09-28 16:07:35.663320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59109 ] 00:06:21.168 [2024-09-28 16:07:35.815018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.734 [2024-09-28 16:07:36.298874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.636 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.636 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:23.636 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59109 00:06:23.636 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59109 00:06:23.636 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.571 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59093 00:06:24.571 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59093 ']' 00:06:24.571 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59093 00:06:24.571 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.571 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.571 16:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59093 00:06:24.571 16:07:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:24.572 killing process with pid 59093 00:06:24.572 16:07:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.572 16:07:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59093' 00:06:24.572 16:07:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59093 00:06:24.572 16:07:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59093 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59109 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59109 ']' 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59109 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59109 00:06:29.839 killing process with pid 59109 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59109' 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59109 00:06:29.839 16:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 59109 00:06:32.408 00:06:32.408 real 0m12.875s 00:06:32.408 user 0m12.695s 00:06:32.408 sys 0m1.758s 00:06:32.408 16:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.408 ************************************ 00:06:32.408 END TEST locking_app_on_unlocked_coremask 00:06:32.408 16:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.408 ************************************ 00:06:32.408 16:07:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:32.408 16:07:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.408 16:07:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.408 16:07:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.408 ************************************ 00:06:32.408 START TEST locking_app_on_locked_coremask 00:06:32.408 ************************************ 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59276 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59276 /var/tmp/spdk.sock 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59276 ']' 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.408 16:07:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.667 [2024-09-28 16:07:47.110825] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:32.667 [2024-09-28 16:07:47.111034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59276 ] 00:06:32.667 [2024-09-28 16:07:47.298458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.925 [2024-09-28 16:07:47.545872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59292 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59292 /var/tmp/spdk2.sock 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59292 /var/tmp/spdk2.sock 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59292 /var/tmp/spdk2.sock 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59292 ']' 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.302 16:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.302 [2024-09-28 16:07:48.635948] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:34.302 [2024-09-28 16:07:48.636175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59292 ] 00:06:34.302 [2024-09-28 16:07:48.794827] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59276 has claimed it. 00:06:34.302 [2024-09-28 16:07:48.794918] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.870 ERROR: process (pid: 59292) is no longer running 00:06:34.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59292) - No such process 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59276 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59276 00:06:34.870 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59276 00:06:35.129 16:07:49 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59276 ']' 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59276 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59276 00:06:35.129 killing process with pid 59276 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59276' 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59276 00:06:35.129 16:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59276 00:06:38.427 00:06:38.427 real 0m5.399s 00:06:38.427 user 0m5.321s 00:06:38.427 sys 0m1.044s 00:06:38.427 16:07:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.427 ************************************ 00:06:38.427 END TEST locking_app_on_locked_coremask 00:06:38.427 ************************************ 00:06:38.427 16:07:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.427 16:07:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:38.427 16:07:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:06:38.427 16:07:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.427 16:07:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.427 ************************************ 00:06:38.427 START TEST locking_overlapped_coremask 00:06:38.427 ************************************ 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59367 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59367 /var/tmp/spdk.sock 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59367 ']' 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.427 16:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.427 [2024-09-28 16:07:52.592745] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:38.427 [2024-09-28 16:07:52.592883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59367 ] 00:06:38.427 [2024-09-28 16:07:52.763947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.427 [2024-09-28 16:07:53.012880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.427 [2024-09-28 16:07:53.013011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.427 [2024-09-28 16:07:53.013053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59390 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59390 /var/tmp/spdk2.sock 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59390 /var/tmp/spdk2.sock 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:39.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59390 /var/tmp/spdk2.sock 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59390 ']' 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.364 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.623 [2024-09-28 16:07:54.111121] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:39.623 [2024-09-28 16:07:54.111266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:06:39.623 [2024-09-28 16:07:54.269684] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59367 has claimed it. 00:06:39.623 [2024-09-28 16:07:54.269747] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:40.190 ERROR: process (pid: 59390) is no longer running 00:06:40.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59390) - No such process 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59367 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59367 ']' 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59367 00:06:40.190 16:07:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59367 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59367' 00:06:40.190 killing process with pid 59367 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59367 00:06:40.190 16:07:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59367 00:06:43.475 00:06:43.475 real 0m5.027s 00:06:43.475 user 0m13.018s 00:06:43.475 sys 0m0.795s 00:06:43.475 16:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.475 16:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.475 ************************************ 00:06:43.475 END TEST locking_overlapped_coremask 00:06:43.475 ************************************ 00:06:43.475 16:07:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.475 16:07:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.476 16:07:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.476 16:07:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.476 ************************************ 00:06:43.476 START TEST 
locking_overlapped_coremask_via_rpc 00:06:43.476 ************************************ 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59460 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59460 /var/tmp/spdk.sock 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59460 ']' 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.476 16:07:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.476 [2024-09-28 16:07:57.692430] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:43.476 [2024-09-28 16:07:57.692562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:06:43.476 [2024-09-28 16:07:57.863390] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.476 [2024-09-28 16:07:57.863469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.476 [2024-09-28 16:07:58.111086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.476 [2024-09-28 16:07:58.111251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.476 [2024-09-28 16:07:58.111339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59478 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59478 /var/tmp/spdk2.sock 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59478 ']' 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.852 16:07:59 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.852 16:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.852 [2024-09-28 16:07:59.196206] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:44.852 [2024-09-28 16:07:59.196452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59478 ] 00:06:44.852 [2024-09-28 16:07:59.360414] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.852 [2024-09-28 16:07:59.360481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.417 [2024-09-28 16:07:59.880496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.417 [2024-09-28 16:07:59.883268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.417 [2024-09-28 16:07:59.883269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.316 16:08:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.316 [2024-09-28 16:08:01.887401] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59460 has claimed it. 00:06:47.316 request: 00:06:47.316 { 00:06:47.316 "method": "framework_enable_cpumask_locks", 00:06:47.316 "req_id": 1 00:06:47.316 } 00:06:47.316 Got JSON-RPC error response 00:06:47.316 response: 00:06:47.316 { 00:06:47.316 "code": -32603, 00:06:47.316 "message": "Failed to claim CPU core: 2" 00:06:47.316 } 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59460 /var/tmp/spdk.sock 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59460 ']' 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.316 16:08:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59478 /var/tmp/spdk2.sock 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59478 ']' 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.575 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.833 00:06:47.833 real 0m4.738s 00:06:47.833 user 0m1.251s 00:06:47.833 sys 0m0.214s 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.833 16:08:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.833 ************************************ 00:06:47.833 END TEST locking_overlapped_coremask_via_rpc 00:06:47.833 ************************************ 00:06:47.833 16:08:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.833 16:08:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59460 ]] 00:06:47.833 16:08:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59460 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59460 ']' 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59460 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59460 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59460' 00:06:47.833 killing process with pid 59460 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59460 00:06:47.833 16:08:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59460 00:06:51.121 16:08:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59478 ]] 00:06:51.121 16:08:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59478 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59478 ']' 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59478 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59478 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59478' 00:06:51.121 killing 
process with pid 59478 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59478 00:06:51.121 16:08:05 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59478 00:06:53.656 16:08:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.656 Process with pid 59460 is not found 00:06:53.656 Process with pid 59478 is not found 00:06:53.656 16:08:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.656 16:08:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59460 ]] 00:06:53.656 16:08:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59460 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59460 ']' 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59460 00:06:53.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59460) - No such process 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59460 is not found' 00:06:53.656 16:08:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59478 ]] 00:06:53.656 16:08:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59478 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59478 ']' 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59478 00:06:53.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59478) - No such process 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59478 is not found' 00:06:53.656 16:08:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.656 00:06:53.656 real 0m55.680s 00:06:53.656 user 1m31.059s 00:06:53.656 sys 0m8.696s 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.656 16:08:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.656 
************************************ 00:06:53.656 END TEST cpu_locks 00:06:53.656 ************************************ 00:06:53.656 ************************************ 00:06:53.656 END TEST event 00:06:53.656 ************************************ 00:06:53.656 00:06:53.656 real 1m27.860s 00:06:53.656 user 2m32.394s 00:06:53.656 sys 0m12.952s 00:06:53.656 16:08:07 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.656 16:08:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.656 16:08:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:53.656 16:08:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.656 16:08:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.656 16:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.656 ************************************ 00:06:53.656 START TEST thread 00:06:53.656 ************************************ 00:06:53.656 16:08:08 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:53.656 * Looking for test storage... 
00:06:53.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:53.656 16:08:08 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.656 16:08:08 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.656 16:08:08 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.656 16:08:08 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.656 16:08:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.656 16:08:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.656 16:08:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.656 16:08:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.656 16:08:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.656 16:08:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.656 16:08:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.656 16:08:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.656 16:08:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.657 16:08:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.657 16:08:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.657 16:08:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:53.657 16:08:08 thread -- scripts/common.sh@345 -- # : 1 00:06:53.657 16:08:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.657 16:08:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.657 16:08:08 thread -- scripts/common.sh@365 -- # decimal 1 00:06:53.657 16:08:08 thread -- scripts/common.sh@353 -- # local d=1 00:06:53.657 16:08:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.657 16:08:08 thread -- scripts/common.sh@355 -- # echo 1 00:06:53.657 16:08:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.657 16:08:08 thread -- scripts/common.sh@366 -- # decimal 2 00:06:53.657 16:08:08 thread -- scripts/common.sh@353 -- # local d=2 00:06:53.657 16:08:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.657 16:08:08 thread -- scripts/common.sh@355 -- # echo 2 00:06:53.657 16:08:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.657 16:08:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.657 16:08:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.657 16:08:08 thread -- scripts/common.sh@368 -- # return 0 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.657 --rc genhtml_branch_coverage=1 00:06:53.657 --rc genhtml_function_coverage=1 00:06:53.657 --rc genhtml_legend=1 00:06:53.657 --rc geninfo_all_blocks=1 00:06:53.657 --rc geninfo_unexecuted_blocks=1 00:06:53.657 00:06:53.657 ' 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.657 --rc genhtml_branch_coverage=1 00:06:53.657 --rc genhtml_function_coverage=1 00:06:53.657 --rc genhtml_legend=1 00:06:53.657 --rc geninfo_all_blocks=1 00:06:53.657 --rc geninfo_unexecuted_blocks=1 00:06:53.657 00:06:53.657 ' 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.657 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.657 --rc genhtml_branch_coverage=1 00:06:53.657 --rc genhtml_function_coverage=1 00:06:53.657 --rc genhtml_legend=1 00:06:53.657 --rc geninfo_all_blocks=1 00:06:53.657 --rc geninfo_unexecuted_blocks=1 00:06:53.657 00:06:53.657 ' 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.657 --rc genhtml_branch_coverage=1 00:06:53.657 --rc genhtml_function_coverage=1 00:06:53.657 --rc genhtml_legend=1 00:06:53.657 --rc geninfo_all_blocks=1 00:06:53.657 --rc geninfo_unexecuted_blocks=1 00:06:53.657 00:06:53.657 ' 00:06:53.657 16:08:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.657 16:08:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.657 ************************************ 00:06:53.657 START TEST thread_poller_perf 00:06:53.657 ************************************ 00:06:53.657 16:08:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.916 [2024-09-28 16:08:08.356920] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:53.916 [2024-09-28 16:08:08.357121] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:06:53.916 [2024-09-28 16:08:08.519301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.175 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:54.175 [2024-09-28 16:08:08.768369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.551 ====================================== 00:06:55.551 busy:2300875430 (cyc) 00:06:55.551 total_run_count: 433000 00:06:55.551 tsc_hz: 2290000000 (cyc) 00:06:55.551 ====================================== 00:06:55.551 poller_cost: 5313 (cyc), 2320 (nsec) 00:06:55.551 00:06:55.551 real 0m1.871s 00:06:55.551 user 0m1.636s 00:06:55.551 sys 0m0.125s 00:06:55.551 16:08:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.551 ************************************ 00:06:55.551 END TEST thread_poller_perf 00:06:55.551 ************************************ 00:06:55.551 16:08:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.810 16:08:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.810 16:08:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:55.810 16:08:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.810 16:08:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.810 ************************************ 00:06:55.810 START TEST thread_poller_perf 00:06:55.810 ************************************ 00:06:55.810 16:08:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:06:55.810 [2024-09-28 16:08:10.286633] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:55.810 [2024-09-28 16:08:10.286731] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ] 00:06:55.810 [2024-09-28 16:08:10.451429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.069 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:56.069 [2024-09-28 16:08:10.686805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.446 ====================================== 00:06:57.446 busy:2294052690 (cyc) 00:06:57.446 total_run_count: 5680000 00:06:57.446 tsc_hz: 2290000000 (cyc) 00:06:57.446 ====================================== 00:06:57.446 poller_cost: 403 (cyc), 175 (nsec) 00:06:57.446 00:06:57.446 real 0m1.838s 00:06:57.446 user 0m1.600s 00:06:57.446 sys 0m0.131s 00:06:57.446 ************************************ 00:06:57.446 END TEST thread_poller_perf 00:06:57.446 ************************************ 00:06:57.446 16:08:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.446 16:08:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.705 16:08:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.705 ************************************ 00:06:57.705 END TEST thread 00:06:57.705 ************************************ 00:06:57.705 00:06:57.705 real 0m4.082s 00:06:57.705 user 0m3.401s 00:06:57.705 sys 0m0.471s 00:06:57.705 16:08:12 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.705 16:08:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.705 16:08:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:57.705 16:08:12 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.705 16:08:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.705 16:08:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.705 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.705 ************************************ 00:06:57.705 START TEST app_cmdline 00:06:57.705 ************************************ 00:06:57.705 16:08:12 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.705 * Looking for test storage... 00:06:57.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.705 16:08:12 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.705 16:08:12 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.705 16:08:12 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:57.964 16:08:12 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.964 16:08:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.964 --rc genhtml_branch_coverage=1 00:06:57.964 --rc genhtml_function_coverage=1 00:06:57.964 --rc genhtml_legend=1 00:06:57.964 --rc geninfo_all_blocks=1 00:06:57.964 --rc geninfo_unexecuted_blocks=1 00:06:57.964 00:06:57.964 ' 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.964 --rc genhtml_branch_coverage=1 00:06:57.964 --rc 
genhtml_function_coverage=1 00:06:57.964 --rc genhtml_legend=1 00:06:57.964 --rc geninfo_all_blocks=1 00:06:57.964 --rc geninfo_unexecuted_blocks=1 00:06:57.964 00:06:57.964 ' 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.964 --rc genhtml_branch_coverage=1 00:06:57.964 --rc genhtml_function_coverage=1 00:06:57.964 --rc genhtml_legend=1 00:06:57.964 --rc geninfo_all_blocks=1 00:06:57.964 --rc geninfo_unexecuted_blocks=1 00:06:57.964 00:06:57.964 ' 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.964 --rc genhtml_branch_coverage=1 00:06:57.964 --rc genhtml_function_coverage=1 00:06:57.964 --rc genhtml_legend=1 00:06:57.964 --rc geninfo_all_blocks=1 00:06:57.964 --rc geninfo_unexecuted_blocks=1 00:06:57.964 00:06:57.964 ' 00:06:57.964 16:08:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.964 16:08:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.964 16:08:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59815 00:06:57.964 16:08:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59815 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59815 ']' 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.964 16:08:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.964 [2024-09-28 16:08:12.527392] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:57.964 [2024-09-28 16:08:12.527606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59815 ] 00:06:58.223 [2024-09-28 16:08:12.696058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.482 [2024-09-28 16:08:12.933837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.418 16:08:13 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.418 16:08:13 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:59.418 16:08:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:59.418 { 00:06:59.418 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:59.418 "fields": { 00:06:59.418 "major": 25, 00:06:59.418 "minor": 1, 00:06:59.418 "patch": 0, 00:06:59.418 "suffix": "-pre", 00:06:59.418 "commit": "09cc66129" 00:06:59.418 } 00:06:59.418 } 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.677 16:08:14 app_cmdline -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:59.677 16:08:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.677 16:08:14 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.677 16:08:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.678 16:08:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.678 16:08:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.678 request: 00:06:59.678 { 00:06:59.678 "method": "env_dpdk_get_mem_stats", 
00:06:59.678 "req_id": 1 00:06:59.678 } 00:06:59.678 Got JSON-RPC error response 00:06:59.678 response: 00:06:59.678 { 00:06:59.678 "code": -32601, 00:06:59.678 "message": "Method not found" 00:06:59.678 } 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.678 16:08:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59815 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59815 ']' 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59815 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:59.678 16:08:14 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.937 16:08:14 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59815 00:06:59.937 killing process with pid 59815 00:06:59.937 16:08:14 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.937 16:08:14 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.937 16:08:14 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59815' 00:06:59.937 16:08:14 app_cmdline -- common/autotest_common.sh@969 -- # kill 59815 00:06:59.937 16:08:14 app_cmdline -- common/autotest_common.sh@974 -- # wait 59815 00:07:02.476 ************************************ 00:07:02.476 END TEST app_cmdline 00:07:02.476 ************************************ 00:07:02.476 00:07:02.476 real 0m4.784s 00:07:02.476 user 0m4.698s 00:07:02.476 sys 0m0.793s 00:07:02.476 16:08:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.476 16:08:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.476 16:08:17 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.476 16:08:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.476 16:08:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.476 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:02.476 ************************************ 00:07:02.476 START TEST version 00:07:02.476 ************************************ 00:07:02.476 16:08:17 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.737 * Looking for test storage... 00:07:02.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:02.737 16:08:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.737 16:08:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.737 16:08:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.737 16:08:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.737 16:08:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.737 16:08:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.737 16:08:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.737 16:08:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.737 16:08:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.737 16:08:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.737 16:08:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.737 16:08:17 version -- scripts/common.sh@344 -- # case "$op" in 00:07:02.737 16:08:17 version -- scripts/common.sh@345 -- # : 1 00:07:02.737 16:08:17 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.737 16:08:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.737 16:08:17 version -- scripts/common.sh@365 -- # decimal 1 00:07:02.737 16:08:17 version -- scripts/common.sh@353 -- # local d=1 00:07:02.737 16:08:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.737 16:08:17 version -- scripts/common.sh@355 -- # echo 1 00:07:02.737 16:08:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.737 16:08:17 version -- scripts/common.sh@366 -- # decimal 2 00:07:02.737 16:08:17 version -- scripts/common.sh@353 -- # local d=2 00:07:02.737 16:08:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.737 16:08:17 version -- scripts/common.sh@355 -- # echo 2 00:07:02.737 16:08:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.737 16:08:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.737 16:08:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.737 16:08:17 version -- scripts/common.sh@368 -- # return 0 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:02.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.737 --rc genhtml_branch_coverage=1 00:07:02.737 --rc genhtml_function_coverage=1 00:07:02.737 --rc genhtml_legend=1 00:07:02.737 --rc geninfo_all_blocks=1 00:07:02.737 --rc geninfo_unexecuted_blocks=1 00:07:02.737 00:07:02.737 ' 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:02.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.737 --rc genhtml_branch_coverage=1 00:07:02.737 --rc genhtml_function_coverage=1 00:07:02.737 --rc genhtml_legend=1 00:07:02.737 --rc geninfo_all_blocks=1 00:07:02.737 --rc geninfo_unexecuted_blocks=1 
00:07:02.737 00:07:02.737 ' 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:02.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.737 --rc genhtml_branch_coverage=1 00:07:02.737 --rc genhtml_function_coverage=1 00:07:02.737 --rc genhtml_legend=1 00:07:02.737 --rc geninfo_all_blocks=1 00:07:02.737 --rc geninfo_unexecuted_blocks=1 00:07:02.737 00:07:02.737 ' 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:02.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.737 --rc genhtml_branch_coverage=1 00:07:02.737 --rc genhtml_function_coverage=1 00:07:02.737 --rc genhtml_legend=1 00:07:02.737 --rc geninfo_all_blocks=1 00:07:02.737 --rc geninfo_unexecuted_blocks=1 00:07:02.737 00:07:02.737 ' 00:07:02.737 16:08:17 version -- app/version.sh@17 -- # get_header_version major 00:07:02.737 16:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.737 16:08:17 version -- app/version.sh@14 -- # cut -f2 00:07:02.737 16:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.737 16:08:17 version -- app/version.sh@17 -- # major=25 00:07:02.737 16:08:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:02.737 16:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.737 16:08:17 version -- app/version.sh@14 -- # cut -f2 00:07:02.737 16:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.737 16:08:17 version -- app/version.sh@18 -- # minor=1 00:07:02.737 16:08:17 version -- app/version.sh@19 -- # get_header_version patch 00:07:02.737 16:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.737 16:08:17 version -- app/version.sh@14 -- # cut -f2 00:07:02.737 
16:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.737 16:08:17 version -- app/version.sh@19 -- # patch=0 00:07:02.737 16:08:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:02.737 16:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.737 16:08:17 version -- app/version.sh@14 -- # cut -f2 00:07:02.737 16:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.737 16:08:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:02.737 16:08:17 version -- app/version.sh@22 -- # version=25.1 00:07:02.737 16:08:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.737 16:08:17 version -- app/version.sh@28 -- # version=25.1rc0 00:07:02.737 16:08:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:02.737 16:08:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:02.737 16:08:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:02.737 16:08:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:02.737 00:07:02.737 real 0m0.330s 00:07:02.737 user 0m0.202s 00:07:02.737 sys 0m0.185s 00:07:02.737 ************************************ 00:07:02.737 END TEST version 00:07:02.737 ************************************ 00:07:02.737 16:08:17 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.737 16:08:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:02.996 16:08:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:02.996 16:08:17 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:02.996 16:08:17 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:02.996 16:08:17 -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:07:02.996 16:08:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.996 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:02.996 ************************************ 00:07:02.996 START TEST bdev_raid 00:07:02.996 ************************************ 00:07:02.996 16:08:17 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:02.996 * Looking for test storage... 00:07:02.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:02.996 16:08:17 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:02.996 16:08:17 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:02.996 16:08:17 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:02.996 16:08:17 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.996 16:08:17 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.997 16:08:17 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:02.997 16:08:17 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:02.997 16:08:17 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.997 16:08:17 bdev_raid -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.256 16:08:17 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:03.256 16:08:17 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.256 16:08:17 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.256 --rc genhtml_branch_coverage=1 00:07:03.256 --rc genhtml_function_coverage=1 00:07:03.256 --rc genhtml_legend=1 00:07:03.256 --rc geninfo_all_blocks=1 00:07:03.256 --rc geninfo_unexecuted_blocks=1 00:07:03.256 00:07:03.256 ' 00:07:03.256 16:08:17 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.256 --rc genhtml_branch_coverage=1 00:07:03.256 --rc genhtml_function_coverage=1 00:07:03.256 --rc genhtml_legend=1 00:07:03.256 --rc geninfo_all_blocks=1 00:07:03.256 --rc geninfo_unexecuted_blocks=1 00:07:03.256 00:07:03.256 ' 00:07:03.256 16:08:17 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.256 --rc genhtml_branch_coverage=1 00:07:03.256 --rc genhtml_function_coverage=1 00:07:03.256 --rc genhtml_legend=1 00:07:03.256 --rc geninfo_all_blocks=1 00:07:03.256 --rc geninfo_unexecuted_blocks=1 00:07:03.256 00:07:03.256 ' 00:07:03.256 16:08:17 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.256 --rc genhtml_branch_coverage=1 00:07:03.256 --rc genhtml_function_coverage=1 00:07:03.256 --rc genhtml_legend=1 00:07:03.256 --rc geninfo_all_blocks=1 00:07:03.256 --rc geninfo_unexecuted_blocks=1 00:07:03.256 00:07:03.256 ' 00:07:03.256 16:08:17 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.256 16:08:17 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.256 16:08:17 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:03.256 16:08:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:03.256 16:08:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:03.256 16:08:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:03.256 16:08:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:03.256 16:08:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.256 16:08:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.256 16:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 ************************************ 00:07:03.256 START TEST raid1_resize_data_offset_test 00:07:03.256 ************************************ 00:07:03.256 Process raid pid: 60008 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:03.256 16:08:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60008 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60008' 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60008 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60008 ']' 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.256 16:08:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 [2024-09-28 16:08:17.810775] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:03.256 [2024-09-28 16:08:17.810964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.515 [2024-09-28 16:08:17.978002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.774 [2024-09-28 16:08:18.223184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.032 [2024-09-28 16:08:18.469735] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.032 [2024-09-28 16:08:18.469896] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.033 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.033 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:04.033 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:04.033 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.033 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.291 malloc0 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.291 malloc1 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.291 16:08:18 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.291 null0 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.291 [2024-09-28 16:08:18.850307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:04.291 [2024-09-28 16:08:18.852416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:04.291 [2024-09-28 16:08:18.852469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:04.291 [2024-09-28 16:08:18.852643] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:04.291 [2024-09-28 16:08:18.852657] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:04.291 [2024-09-28 16:08:18.852946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:04.291 [2024-09-28 16:08:18.853163] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:04.291 [2024-09-28 16:08:18.853178] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:04.291 [2024-09-28 16:08:18.853388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.291 [2024-09-28 16:08:18.910137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.291 16:08:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.858 malloc2 00:07:04.858 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.858 16:08:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:04.858 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.859 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.117 [2024-09-28 16:08:19.546109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:05.117 [2024-09-28 16:08:19.563296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.117 [2024-09-28 16:08:19.565468] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60008 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60008 ']' 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60008 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60008 00:07:05.117 killing process with pid 60008 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60008' 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60008 00:07:05.117 16:08:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60008 00:07:05.117 [2024-09-28 16:08:19.646074] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.117 [2024-09-28 16:08:19.646449] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:05.117 [2024-09-28 16:08:19.646521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.117 [2024-09-28 16:08:19.646543] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:05.117 [2024-09-28 16:08:19.675802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.117 [2024-09-28 16:08:19.676156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.117 [2024-09-28 16:08:19.676180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:07.021 [2024-09-28 16:08:21.529724] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.427 ************************************ 00:07:08.427 END TEST raid1_resize_data_offset_test 00:07:08.427 ************************************ 00:07:08.427 16:08:22 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:08.427 00:07:08.427 real 0m5.133s 00:07:08.427 user 0m4.806s 00:07:08.427 sys 0m0.744s 00:07:08.427 16:08:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.427 16:08:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.427 16:08:22 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:08.427 16:08:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.427 16:08:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.427 16:08:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.427 ************************************ 00:07:08.427 START TEST raid0_resize_superblock_test 00:07:08.427 ************************************ 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60097 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60097' 00:07:08.427 Process raid pid: 60097 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60097 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60097 ']' 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.427 16:08:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.427 [2024-09-28 16:08:23.014774] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:08.427 [2024-09-28 16:08:23.014966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.685 [2024-09-28 16:08:23.179708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.943 [2024-09-28 16:08:23.431291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.201 [2024-09-28 16:08:23.664614] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.201 [2024-09-28 16:08:23.664765] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.201 16:08:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.201 16:08:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.201 16:08:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:09.201 16:08:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.201 16:08:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:10.135 malloc0 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 [2024-09-28 16:08:24.471697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:10.135 [2024-09-28 16:08:24.471833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.135 [2024-09-28 16:08:24.471886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:10.135 [2024-09-28 16:08:24.471949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.135 [2024-09-28 16:08:24.474398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.135 [2024-09-28 16:08:24.474500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:10.135 pt0 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 41f7e792-2295-47c4-9daa-16c35cac047a 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 dd2020c5-8ac6-4936-958b-152bddf6647d 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 69785d98-0e0c-42d5-bbbd-760c55ba94b7 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 [2024-09-28 16:08:24.679591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev dd2020c5-8ac6-4936-958b-152bddf6647d is claimed 00:07:10.135 [2024-09-28 16:08:24.679690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 69785d98-0e0c-42d5-bbbd-760c55ba94b7 is claimed 00:07:10.135 [2024-09-28 16:08:24.679825] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:10.135 [2024-09-28 16:08:24.679844] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:10.135 [2024-09-28 16:08:24.680123] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:10.135 [2024-09-28 16:08:24.680375] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:10.135 [2024-09-28 16:08:24.680390] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:10.135 [2024-09-28 16:08:24.680562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:10.136 16:08:24 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:10.136 [2024-09-28 16:08:24.791562] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.136 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 [2024-09-28 16:08:24.843536] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:10.395 [2024-09-28 16:08:24.843566] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'dd2020c5-8ac6-4936-958b-152bddf6647d' was resized: old size 131072, new size 204800 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 [2024-09-28 16:08:24.855510] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:10.395 [2024-09-28 16:08:24.855536] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '69785d98-0e0c-42d5-bbbd-760c55ba94b7' was resized: old size 131072, new size 204800 00:07:10.395 [2024-09-28 16:08:24.855565] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 16:08:24 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 [2024-09-28 16:08:24.967402] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.395 16:08:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 [2024-09-28 16:08:25.011099] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:10.395 [2024-09-28 16:08:25.011253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:10.395 [2024-09-28 16:08:25.011274] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.395 [2024-09-28 16:08:25.011295] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:10.395 [2024-09-28 16:08:25.011400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.395 [2024-09-28 16:08:25.011438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.395 [2024-09-28 16:08:25.011452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 [2024-09-28 16:08:25.023055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:10.395 [2024-09-28 16:08:25.023115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.395 [2024-09-28 16:08:25.023137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:10.395 [2024-09-28 16:08:25.023150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.395 [2024-09-28 16:08:25.025667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.395 [2024-09-28 16:08:25.025774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:10.395 [2024-09-28 16:08:25.027541] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev dd2020c5-8ac6-4936-958b-152bddf6647d 00:07:10.395 [2024-09-28 16:08:25.027616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev dd2020c5-8ac6-4936-958b-152bddf6647d is claimed 00:07:10.395 [2024-09-28 16:08:25.027739] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 69785d98-0e0c-42d5-bbbd-760c55ba94b7 00:07:10.395 [2024-09-28 16:08:25.027761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 69785d98-0e0c-42d5-bbbd-760c55ba94b7 is claimed 00:07:10.395 [2024-09-28 16:08:25.027916] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 69785d98-0e0c-42d5-bbbd-760c55ba94b7 (2) smaller than existing raid bdev Raid (3) 00:07:10.395 [2024-09-28 16:08:25.027942] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev dd2020c5-8ac6-4936-958b-152bddf6647d: File exists 00:07:10.395 [2024-09-28 16:08:25.027982] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:10.395 [2024-09-28 16:08:25.027997] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:10.395 pt0 00:07:10.395 [2024-09-28 16:08:25.028273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:10.395 [2024-09-28 16:08:25.028425] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:10.395 [2024-09-28 16:08:25.028442] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:10.395 [2024-09-28 16:08:25.028585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.396 [2024-09-28 16:08:25.051835] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.396 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60097 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60097 ']' 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60097 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60097 00:07:10.654 killing process with pid 60097 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60097' 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60097 00:07:10.654 [2024-09-28 16:08:25.128270] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.654 [2024-09-28 16:08:25.128338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.654 [2024-09-28 16:08:25.128382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.654 [2024-09-28 16:08:25.128392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:10.654 16:08:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60097 00:07:12.031 [2024-09-28 16:08:26.602779] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.407 16:08:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:13.407 00:07:13.407 real 0m4.999s 00:07:13.407 user 0m4.977s 00:07:13.407 sys 0m0.775s 00:07:13.407 16:08:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.407 ************************************ 00:07:13.407 END TEST raid0_resize_superblock_test 00:07:13.407 
************************************ 00:07:13.407 16:08:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 16:08:27 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:13.407 16:08:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.407 16:08:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.407 16:08:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 ************************************ 00:07:13.407 START TEST raid1_resize_superblock_test 00:07:13.407 ************************************ 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:13.407 Process raid pid: 60198 00:07:13.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60198 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60198' 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60198 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60198 ']' 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.407 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.665 [2024-09-28 16:08:28.098496] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:13.665 [2024-09-28 16:08:28.098703] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.665 [2024-09-28 16:08:28.269088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.924 [2024-09-28 16:08:28.510132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.183 [2024-09-28 16:08:28.740208] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.183 [2024-09-28 16:08:28.740251] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.442 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.442 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:14.442 16:08:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:14.442 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.442 16:08:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.009 malloc0 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.009 [2024-09-28 16:08:29.522987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:15.009 [2024-09-28 16:08:29.523060] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.009 [2024-09-28 16:08:29.523086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:15.009 [2024-09-28 16:08:29.523098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.009 [2024-09-28 16:08:29.525535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.009 [2024-09-28 16:08:29.525575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:15.009 pt0 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.009 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 dd32db40-02c9-49da-9d5d-afa7962a8ccf 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 36a856a7-33f1-4d0f-b72e-7ee7bb237cf3 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.268 16:08:29 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 a1288655-24e4-4669-8027-16fe8e88b6bd 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 [2024-09-28 16:08:29.728342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 36a856a7-33f1-4d0f-b72e-7ee7bb237cf3 is claimed 00:07:15.268 [2024-09-28 16:08:29.728444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev a1288655-24e4-4669-8027-16fe8e88b6bd is claimed 00:07:15.268 [2024-09-28 16:08:29.728575] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:15.268 [2024-09-28 16:08:29.728592] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:15.268 [2024-09-28 16:08:29.728866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.268 [2024-09-28 16:08:29.729054] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:15.268 [2024-09-28 16:08:29.729064] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:15.268 [2024-09-28 16:08:29.729213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 [2024-09-28 
16:08:29.840301] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.268 [2024-09-28 16:08:29.888150] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.268 [2024-09-28 16:08:29.888219] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '36a856a7-33f1-4d0f-b72e-7ee7bb237cf3' was resized: old size 131072, new size 204800 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:15.268 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.269 [2024-09-28 16:08:29.900110] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.269 [2024-09-28 16:08:29.900171] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a1288655-24e4-4669-8027-16fe8e88b6bd' was resized: old size 131072, new size 204800 00:07:15.269 
[2024-09-28 16:08:29.900201] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.269 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.528 16:08:29 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.528 [2024-09-28 16:08:30.004002] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.528 16:08:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.528 [2024-09-28 16:08:30.047726] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:15.528 [2024-09-28 16:08:30.047787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:15.528 [2024-09-28 16:08:30.047822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:15.528 [2024-09-28 16:08:30.047948] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.528 [2024-09-28 16:08:30.048099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.528 [2024-09-28 16:08:30.048162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.528 
[2024-09-28 16:08:30.048179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.528 [2024-09-28 16:08:30.059676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:15.528 [2024-09-28 16:08:30.059728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.528 [2024-09-28 16:08:30.059746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:15.528 [2024-09-28 16:08:30.059757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.528 [2024-09-28 16:08:30.062155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.528 [2024-09-28 16:08:30.062191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:15.528 [2024-09-28 16:08:30.063794] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 36a856a7-33f1-4d0f-b72e-7ee7bb237cf3 00:07:15.528 [2024-09-28 16:08:30.063912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 36a856a7-33f1-4d0f-b72e-7ee7bb237cf3 is claimed 00:07:15.528 [2024-09-28 16:08:30.064037] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a1288655-24e4-4669-8027-16fe8e88b6bd 00:07:15.528 [2024-09-28 16:08:30.064056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev a1288655-24e4-4669-8027-16fe8e88b6bd is claimed 00:07:15.528 [2024-09-28 16:08:30.064196] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev a1288655-24e4-4669-8027-16fe8e88b6bd (2) smaller than existing raid bdev Raid (3) 00:07:15.528 [2024-09-28 16:08:30.064218] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 36a856a7-33f1-4d0f-b72e-7ee7bb237cf3: File exists 00:07:15.528 [2024-09-28 16:08:30.064271] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:15.528 [2024-09-28 16:08:30.064284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:15.528 [2024-09-28 16:08:30.064538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:15.528 pt0 00:07:15.528 [2024-09-28 16:08:30.064687] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:15.528 [2024-09-28 16:08:30.064695] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:15.528 [2024-09-28 16:08:30.064831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.528 [2024-09-28 16:08:30.088052] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60198 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60198 ']' 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60198 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60198 00:07:15.528 killing process with pid 60198 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60198' 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60198 00:07:15.528 [2024-09-28 16:08:30.162284] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.528 [2024-09-28 16:08:30.162347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.528 [2024-09-28 16:08:30.162391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.528 [2024-09-28 16:08:30.162400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:15.528 16:08:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60198 00:07:17.433 [2024-09-28 16:08:31.646866] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.369 16:08:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:18.369 00:07:18.369 real 0m4.969s 00:07:18.369 user 0m4.957s 00:07:18.369 sys 0m0.786s 00:07:18.369 16:08:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.369 ************************************ 00:07:18.369 END TEST raid1_resize_superblock_test 00:07:18.369 ************************************ 00:07:18.369 16:08:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.369 16:08:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:18.369 16:08:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:18.369 16:08:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:18.369 16:08:33 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:18.369 16:08:33 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:18.369 16:08:33 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:18.369 
16:08:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.369 16:08:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.369 16:08:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.629 ************************************ 00:07:18.629 START TEST raid_function_test_raid0 00:07:18.629 ************************************ 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60306 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60306' 00:07:18.629 Process raid pid: 60306 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60306 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60306 ']' 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.629 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:18.629 [2024-09-28 16:08:33.147200] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:18.629 [2024-09-28 16:08:33.147427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.888 [2024-09-28 16:08:33.314400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.148 [2024-09-28 16:08:33.580662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.148 [2024-09-28 16:08:33.824556] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.148 [2024-09-28 16:08:33.824590] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.408 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.408 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:19.408 16:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:19.408 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.408 16:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.408 Base_1 00:07:19.408 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.408 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:19.408 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.408 
16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.668 Base_2 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.668 [2024-09-28 16:08:34.105887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:19.668 [2024-09-28 16:08:34.108076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:19.668 [2024-09-28 16:08:34.108145] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:19.668 [2024-09-28 16:08:34.108157] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.668 [2024-09-28 16:08:34.108461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.668 [2024-09-28 16:08:34.108615] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:19.668 [2024-09-28 16:08:34.108631] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:19.668 [2024-09-28 16:08:34.108801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:19.668 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:19.668 [2024-09-28 16:08:34.325477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:19.668 /dev/nbd0 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.928 1+0 records in 00:07:19.928 1+0 records out 00:07:19.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594286 s, 6.9 MB/s 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.928 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.929 { 00:07:19.929 "nbd_device": "/dev/nbd0", 00:07:19.929 "bdev_name": "raid" 00:07:19.929 } 00:07:19.929 ]' 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.929 { 00:07:19.929 "nbd_device": "/dev/nbd0", 00:07:19.929 "bdev_name": "raid" 00:07:19.929 } 00:07:19.929 ]' 00:07:19.929 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:20.189 4096+0 records in 00:07:20.189 4096+0 records out 00:07:20.189 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0272333 s, 77.0 MB/s 00:07:20.189 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:20.449 4096+0 records in 00:07:20.449 4096+0 records out 00:07:20.449 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.196269 s, 10.7 MB/s 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:20.449 128+0 records in 00:07:20.449 128+0 records out 00:07:20.449 65536 bytes (66 kB, 64 KiB) copied, 0.00035029 s, 187 MB/s 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:20.449 2035+0 records in 00:07:20.449 2035+0 records out 00:07:20.449 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00886964 s, 117 MB/s 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:20.449 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.450 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.450 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:20.450 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.450 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:20.450 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:20.450 16:08:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:20.450 456+0 records in 00:07:20.450 456+0 records out 00:07:20.450 233472 bytes (233 kB, 228 KiB) copied, 0.00382348 s, 61.1 MB/s 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.450 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.710 [2024-09-28 16:08:35.237241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.710 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60306 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60306 ']' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60306 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60306 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.970 killing process with pid 60306 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60306' 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60306 00:07:20.970 [2024-09-28 16:08:35.553058] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.970 [2024-09-28 16:08:35.553178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.970 16:08:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60306 00:07:20.970 [2024-09-28 16:08:35.553246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.970 [2024-09-28 16:08:35.553260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:21.230 [2024-09-28 16:08:35.769810] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.611 16:08:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:22.611 00:07:22.611 real 0m4.045s 00:07:22.611 user 0m4.494s 00:07:22.611 sys 0m1.069s 00:07:22.611 16:08:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.611 16:08:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:22.611 ************************************ 00:07:22.611 END TEST raid_function_test_raid0 00:07:22.611 ************************************ 00:07:22.611 16:08:37 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:22.611 16:08:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.611 16:08:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.611 16:08:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.611 
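The raid0 trace above repeats one verification pattern three times: zero a block range in the reference file with `dd conv=notrunc`, discard the matching byte range on the raid device with `blkdiscard`, flush with `blockdev --flushbufs`, then `cmp` the full 2 MiB. A minimal standalone sketch of that bookkeeping follows; it operates on two plain temp files instead of `/dev/nbd0` (the real device side needs a live SPDK nbd export and root), so the `blkdiscard`/`blockdev` step is shown only as a comment.

```shell
#!/usr/bin/env bash
# Sketch of the unmap/verify loop from bdev_raid.sh (raid_unmap_data_verify).
# Both sides are plain files here so the sketch runs anywhere; the real test
# discards on /dev/nbd0 and flushes its buffers instead.
set -euo pipefail

blksize=512
rw_blk_num=4096                      # 4096 * 512 = 2097152 bytes, as in the log
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

ref=$(mktemp)
dev=$(mktemp)
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num status=none
cp "$ref" "$dev"                     # stands in for "dd ... of=/dev/nbd0 oflag=direct"

for ((i = 0; i < ${#unmap_blk_offs[@]}; i++)); do
    unmap_off=$((unmap_blk_offs[i] * blksize))
    unmap_len=$((unmap_blk_nums[i] * blksize))
    # Zero the range in the reference file -- this is what the raid bdev
    # should return for discarded (unmapped) blocks.
    dd if=/dev/zero of="$ref" bs=$blksize seek="${unmap_blk_offs[i]}" \
       count="${unmap_blk_nums[i]}" conv=notrunc status=none
    # Real test does the device side with:
    #   blkdiscard -o "$unmap_off" -l "$unmap_len" /dev/nbd0
    #   blockdev --flushbufs /dev/nbd0
    dd if=/dev/zero of="$dev" bs=$blksize seek="${unmap_blk_offs[i]}" \
       count="${unmap_blk_nums[i]}" conv=notrunc status=none
    # Byte-for-byte comparison over the whole 2 MiB region; cmp exits
    # nonzero (and set -e aborts) on the first mismatch.
    cmp -b -n $((rw_blk_num * blksize)) "$ref" "$dev"
done
rm -f "$ref" "$dev"
```

The offsets in the log fall out of the same arithmetic: iteration two uses `1028 * 512 = 526336` / `2035 * 512 = 1041920`, matching the `unmap_off=526336` and `unmap_len=1041920` lines above.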
************************************ 00:07:22.611 START TEST raid_function_test_concat 00:07:22.611 ************************************ 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60435 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60435' 00:07:22.611 Process raid pid: 60435 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60435 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60435 ']' 00:07:22.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.611 16:08:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.611 [2024-09-28 16:08:37.263018] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:22.611 [2024-09-28 16:08:37.263159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.871 [2024-09-28 16:08:37.432464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.146 [2024-09-28 16:08:37.679044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.423 [2024-09-28 16:08:37.917083] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.423 [2024-09-28 16:08:37.917210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.423 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.423 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:23.423 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:23.423 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.423 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:23.683 Base_1 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:23.683 Base_2 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:23.683 [2024-09-28 16:08:38.184356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:23.683 [2024-09-28 16:08:38.186450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:23.683 [2024-09-28 16:08:38.186524] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:23.683 [2024-09-28 16:08:38.186537] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:23.683 [2024-09-28 16:08:38.186807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:23.683 [2024-09-28 16:08:38.186969] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:23.683 [2024-09-28 16:08:38.186978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:23.683 [2024-09-28 16:08:38.187138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.683 16:08:38 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:23.683 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:23.943 [2024-09-28 16:08:38.431974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:23.943 /dev/nbd0 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.943 1+0 records in 00:07:23.943 1+0 records out 00:07:23.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043756 s, 9.4 MB/s 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.943 
16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.943 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:24.203 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.203 { 00:07:24.203 "nbd_device": "/dev/nbd0", 00:07:24.203 "bdev_name": "raid" 00:07:24.203 } 00:07:24.203 ]' 00:07:24.203 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.203 { 00:07:24.203 "nbd_device": "/dev/nbd0", 00:07:24.203 "bdev_name": "raid" 00:07:24.203 } 00:07:24.203 ]' 00:07:24.203 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.203 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:24.203 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:24.203 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.203 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:24.204 
16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:24.204 4096+0 records in 00:07:24.204 4096+0 records out 00:07:24.204 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0356942 s, 58.8 MB/s 00:07:24.204 16:08:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:24.464 4096+0 records in 00:07:24.464 4096+0 
records out 00:07:24.464 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.209604 s, 10.0 MB/s 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:24.464 128+0 records in 00:07:24.464 128+0 records out 00:07:24.464 65536 bytes (66 kB, 64 KiB) copied, 0.00130665 s, 50.2 MB/s 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:24.464 2035+0 records in 00:07:24.464 2035+0 records out 00:07:24.464 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.014202 s, 73.4 MB/s 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:24.464 456+0 records in 00:07:24.464 456+0 records out 00:07:24.464 233472 bytes (233 kB, 228 KiB) copied, 0.00330861 s, 70.6 MB/s 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.464 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.724 [2024-09-28 16:08:39.355508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:24.724 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.724 16:08:39 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60435 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60435 ']' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60435 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60435 00:07:24.984 killing process with pid 60435 00:07:24.984 16:08:39 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60435' 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60435 00:07:24.984 [2024-09-28 16:08:39.659761] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.984 [2024-09-28 16:08:39.659878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.984 16:08:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60435 00:07:24.984 [2024-09-28 16:08:39.659928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.984 [2024-09-28 16:08:39.659940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:25.244 [2024-09-28 16:08:39.877695] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.626 16:08:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:26.626 00:07:26.626 real 0m4.033s 00:07:26.626 user 0m4.477s 00:07:26.626 sys 0m1.063s 00:07:26.626 16:08:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.626 16:08:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.626 ************************************ 00:07:26.626 END TEST raid_function_test_concat 00:07:26.626 ************************************ 00:07:26.626 16:08:41 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:26.626 16:08:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.626 16:08:41 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.626 16:08:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.626 ************************************ 00:07:26.626 START TEST raid0_resize_test 00:07:26.626 ************************************ 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60564 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60564' 00:07:26.626 Process raid pid: 60564 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60564 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60564 ']' 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.626 16:08:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.886 [2024-09-28 16:08:41.373820] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:26.886 [2024-09-28 16:08:41.374067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.886 [2024-09-28 16:08:41.543459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.146 [2024-09-28 16:08:41.807411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.406 [2024-09-28 16:08:42.045808] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.406 [2024-09-28 16:08:42.045920] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.666 Base_1 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.666 
16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.666 Base_2 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.666 [2024-09-28 16:08:42.233616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:27.666 [2024-09-28 16:08:42.235611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:27.666 [2024-09-28 16:08:42.235661] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:27.666 [2024-09-28 16:08:42.235671] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:27.666 [2024-09-28 16:08:42.235895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:27.666 [2024-09-28 16:08:42.236026] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:27.666 [2024-09-28 16:08:42.236039] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:27.666 [2024-09-28 16:08:42.236184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.666 
16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.666 [2024-09-28 16:08:42.245539] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.666 [2024-09-28 16:08:42.245566] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:27.666 true 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.666 [2024-09-28 16:08:42.261643] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.666 [2024-09-28 16:08:42.305466] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.666 [2024-09-28 16:08:42.305543] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:27.666 [2024-09-28 16:08:42.305623] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:27.666 true 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.666 [2024-09-28 16:08:42.321612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.666 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.925 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:27.925 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:27.925 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:27.925 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:27.925 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60564 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60564 ']' 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60564 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60564 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60564' 00:07:27.926 killing process with pid 60564 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60564 00:07:27.926 [2024-09-28 16:08:42.395935] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.926 16:08:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60564 00:07:27.926 [2024-09-28 16:08:42.396155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.926 [2024-09-28 16:08:42.396218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.926 [2024-09-28 16:08:42.396293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:27.926 [2024-09-28 16:08:42.415086] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.306 16:08:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:29.306 00:07:29.306 real 0m2.466s 00:07:29.306 user 0m2.490s 00:07:29.306 sys 0m0.422s 00:07:29.306 16:08:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.306 
16:08:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.306 ************************************ 00:07:29.306 END TEST raid0_resize_test 00:07:29.306 ************************************ 00:07:29.306 16:08:43 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:29.306 16:08:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.306 16:08:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.306 16:08:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.306 ************************************ 00:07:29.306 START TEST raid1_resize_test 00:07:29.306 ************************************ 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:29.306 Process raid pid: 60626 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60626 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60626' 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60626 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60626 ']' 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.306 16:08:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.306 [2024-09-28 16:08:43.900180] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:29.306 [2024-09-28 16:08:43.900303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.565 [2024-09-28 16:08:44.062540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.825 [2024-09-28 16:08:44.308742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.085 [2024-09-28 16:08:44.541716] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.085 [2024-09-28 16:08:44.541757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.085 Base_1 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.085 Base_2 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.085 [2024-09-28 16:08:44.741487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:30.085 [2024-09-28 16:08:44.743610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:30.085 [2024-09-28 16:08:44.743681] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:30.085 [2024-09-28 16:08:44.743693] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:30.085 [2024-09-28 16:08:44.743981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:30.085 [2024-09-28 16:08:44.744131] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:30.085 [2024-09-28 16:08:44.744140] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:30.085 [2024-09-28 16:08:44.744329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.085 [2024-09-28 16:08:44.753433] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.085 [2024-09-28 16:08:44.753507] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:30.085 true 00:07:30.085 
16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.085 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.345 [2024-09-28 16:08:44.769554] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.345 [2024-09-28 16:08:44.805421] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.345 [2024-09-28 16:08:44.805498] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:30.345 [2024-09-28 16:08:44.805562] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:30.345 true 00:07:30.345 16:08:44 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.345 [2024-09-28 16:08:44.821539] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60626 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60626 ']' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60626 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60626 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60626' 00:07:30.345 killing process with pid 60626 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60626 00:07:30.345 [2024-09-28 16:08:44.896533] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.345 [2024-09-28 16:08:44.896692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.345 16:08:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60626 00:07:30.345 [2024-09-28 16:08:44.897304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.345 [2024-09-28 16:08:44.897369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:30.345 [2024-09-28 16:08:44.915442] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.725 16:08:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:31.725 00:07:31.725 real 0m2.422s 00:07:31.725 user 0m2.418s 00:07:31.725 sys 0m0.442s 00:07:31.725 16:08:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.725 ************************************ 00:07:31.725 END TEST raid1_resize_test 00:07:31.725 ************************************ 00:07:31.725 16:08:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.725 16:08:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:31.725 16:08:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:31.725 16:08:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:31.725 16:08:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:31.725 16:08:46 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.725 16:08:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.725 ************************************ 00:07:31.725 START TEST raid_state_function_test 00:07:31.725 ************************************ 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.725 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:31.726 Process raid pid: 60688 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60688 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60688' 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60688 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60688 ']' 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.726 16:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.726 [2024-09-28 16:08:46.396000] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:31.726 [2024-09-28 16:08:46.396207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.985 [2024-09-28 16:08:46.561383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.244 [2024-09-28 16:08:46.806661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.504 [2024-09-28 16:08:47.037677] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.504 [2024-09-28 16:08:47.037825] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.763 [2024-09-28 
16:08:47.218111] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.763 [2024-09-28 16:08:47.218171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.763 [2024-09-28 16:08:47.218181] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.763 [2024-09-28 16:08:47.218190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.763 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.764 "name": "Existed_Raid", 00:07:32.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.764 "strip_size_kb": 64, 00:07:32.764 "state": "configuring", 00:07:32.764 "raid_level": "raid0", 00:07:32.764 "superblock": false, 00:07:32.764 "num_base_bdevs": 2, 00:07:32.764 "num_base_bdevs_discovered": 0, 00:07:32.764 "num_base_bdevs_operational": 2, 00:07:32.764 "base_bdevs_list": [ 00:07:32.764 { 00:07:32.764 "name": "BaseBdev1", 00:07:32.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.764 "is_configured": false, 00:07:32.764 "data_offset": 0, 00:07:32.764 "data_size": 0 00:07:32.764 }, 00:07:32.764 { 00:07:32.764 "name": "BaseBdev2", 00:07:32.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.764 "is_configured": false, 00:07:32.764 "data_offset": 0, 00:07:32.764 "data_size": 0 00:07:32.764 } 00:07:32.764 ] 00:07:32.764 }' 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.764 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.024 [2024-09-28 16:08:47.629320] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.024 [2024-09-28 
16:08:47.629424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.024 [2024-09-28 16:08:47.641322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.024 [2024-09-28 16:08:47.641406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.024 [2024-09-28 16:08:47.641432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.024 [2024-09-28 16:08:47.641457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.024 [2024-09-28 16:08:47.704746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.024 BaseBdev1 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:33.024 16:08:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.024 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.284 [ 00:07:33.284 { 00:07:33.284 "name": "BaseBdev1", 00:07:33.284 "aliases": [ 00:07:33.284 "5e9eea6c-0908-4104-aa32-ab7864df3326" 00:07:33.284 ], 00:07:33.284 "product_name": "Malloc disk", 00:07:33.284 "block_size": 512, 00:07:33.284 "num_blocks": 65536, 00:07:33.284 "uuid": "5e9eea6c-0908-4104-aa32-ab7864df3326", 00:07:33.284 "assigned_rate_limits": { 00:07:33.284 "rw_ios_per_sec": 0, 00:07:33.284 "rw_mbytes_per_sec": 0, 00:07:33.284 "r_mbytes_per_sec": 0, 00:07:33.284 "w_mbytes_per_sec": 0 00:07:33.284 }, 00:07:33.284 "claimed": true, 00:07:33.284 "claim_type": "exclusive_write", 00:07:33.284 "zoned": false, 00:07:33.284 "supported_io_types": { 
00:07:33.284 "read": true, 00:07:33.284 "write": true, 00:07:33.284 "unmap": true, 00:07:33.284 "flush": true, 00:07:33.284 "reset": true, 00:07:33.284 "nvme_admin": false, 00:07:33.284 "nvme_io": false, 00:07:33.284 "nvme_io_md": false, 00:07:33.284 "write_zeroes": true, 00:07:33.284 "zcopy": true, 00:07:33.284 "get_zone_info": false, 00:07:33.284 "zone_management": false, 00:07:33.284 "zone_append": false, 00:07:33.284 "compare": false, 00:07:33.284 "compare_and_write": false, 00:07:33.284 "abort": true, 00:07:33.284 "seek_hole": false, 00:07:33.284 "seek_data": false, 00:07:33.284 "copy": true, 00:07:33.284 "nvme_iov_md": false 00:07:33.284 }, 00:07:33.284 "memory_domains": [ 00:07:33.284 { 00:07:33.284 "dma_device_id": "system", 00:07:33.284 "dma_device_type": 1 00:07:33.284 }, 00:07:33.284 { 00:07:33.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.284 "dma_device_type": 2 00:07:33.284 } 00:07:33.284 ], 00:07:33.284 "driver_specific": {} 00:07:33.284 } 00:07:33.284 ] 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.284 16:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.284 "name": "Existed_Raid", 00:07:33.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.284 "strip_size_kb": 64, 00:07:33.284 "state": "configuring", 00:07:33.284 "raid_level": "raid0", 00:07:33.284 "superblock": false, 00:07:33.284 "num_base_bdevs": 2, 00:07:33.284 "num_base_bdevs_discovered": 1, 00:07:33.284 "num_base_bdevs_operational": 2, 00:07:33.284 "base_bdevs_list": [ 00:07:33.284 { 00:07:33.284 "name": "BaseBdev1", 00:07:33.284 "uuid": "5e9eea6c-0908-4104-aa32-ab7864df3326", 00:07:33.284 "is_configured": true, 00:07:33.284 "data_offset": 0, 00:07:33.284 "data_size": 65536 00:07:33.284 }, 00:07:33.284 { 00:07:33.284 "name": "BaseBdev2", 00:07:33.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.285 "is_configured": false, 00:07:33.285 "data_offset": 0, 00:07:33.285 "data_size": 0 00:07:33.285 } 00:07:33.285 ] 00:07:33.285 }' 00:07:33.285 16:08:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.285 16:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 [2024-09-28 16:08:48.164042] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.545 [2024-09-28 16:08:48.164082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 [2024-09-28 16:08:48.176062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.545 [2024-09-28 16:08:48.178125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.545 [2024-09-28 16:08:48.178215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.545 16:08:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.545 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.805 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.805 "name": "Existed_Raid", 00:07:33.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.805 "strip_size_kb": 64, 00:07:33.805 "state": "configuring", 00:07:33.805 
"raid_level": "raid0", 00:07:33.805 "superblock": false, 00:07:33.805 "num_base_bdevs": 2, 00:07:33.805 "num_base_bdevs_discovered": 1, 00:07:33.805 "num_base_bdevs_operational": 2, 00:07:33.805 "base_bdevs_list": [ 00:07:33.805 { 00:07:33.805 "name": "BaseBdev1", 00:07:33.805 "uuid": "5e9eea6c-0908-4104-aa32-ab7864df3326", 00:07:33.805 "is_configured": true, 00:07:33.805 "data_offset": 0, 00:07:33.805 "data_size": 65536 00:07:33.805 }, 00:07:33.805 { 00:07:33.805 "name": "BaseBdev2", 00:07:33.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.805 "is_configured": false, 00:07:33.805 "data_offset": 0, 00:07:33.805 "data_size": 0 00:07:33.805 } 00:07:33.805 ] 00:07:33.805 }' 00:07:33.805 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.805 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.065 [2024-09-28 16:08:48.614870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.065 [2024-09-28 16:08:48.615000] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:34.065 [2024-09-28 16:08:48.615028] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:34.065 [2024-09-28 16:08:48.615395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:34.065 [2024-09-28 16:08:48.615617] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:34.065 [2024-09-28 16:08:48.615667] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:07:34.065 [2024-09-28 16:08:48.616005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.065 BaseBdev2 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.065 [ 00:07:34.065 { 00:07:34.065 "name": "BaseBdev2", 00:07:34.065 "aliases": [ 00:07:34.065 "9a7237ef-0185-4dc6-bc90-f5d5f14ea34b" 00:07:34.065 ], 00:07:34.065 "product_name": "Malloc disk", 00:07:34.065 "block_size": 512, 00:07:34.065 
"num_blocks": 65536, 00:07:34.065 "uuid": "9a7237ef-0185-4dc6-bc90-f5d5f14ea34b", 00:07:34.065 "assigned_rate_limits": { 00:07:34.065 "rw_ios_per_sec": 0, 00:07:34.065 "rw_mbytes_per_sec": 0, 00:07:34.065 "r_mbytes_per_sec": 0, 00:07:34.065 "w_mbytes_per_sec": 0 00:07:34.065 }, 00:07:34.065 "claimed": true, 00:07:34.065 "claim_type": "exclusive_write", 00:07:34.065 "zoned": false, 00:07:34.065 "supported_io_types": { 00:07:34.065 "read": true, 00:07:34.065 "write": true, 00:07:34.065 "unmap": true, 00:07:34.065 "flush": true, 00:07:34.065 "reset": true, 00:07:34.065 "nvme_admin": false, 00:07:34.065 "nvme_io": false, 00:07:34.065 "nvme_io_md": false, 00:07:34.065 "write_zeroes": true, 00:07:34.065 "zcopy": true, 00:07:34.065 "get_zone_info": false, 00:07:34.065 "zone_management": false, 00:07:34.065 "zone_append": false, 00:07:34.065 "compare": false, 00:07:34.065 "compare_and_write": false, 00:07:34.065 "abort": true, 00:07:34.065 "seek_hole": false, 00:07:34.065 "seek_data": false, 00:07:34.065 "copy": true, 00:07:34.065 "nvme_iov_md": false 00:07:34.065 }, 00:07:34.065 "memory_domains": [ 00:07:34.065 { 00:07:34.065 "dma_device_id": "system", 00:07:34.065 "dma_device_type": 1 00:07:34.065 }, 00:07:34.065 { 00:07:34.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.065 "dma_device_type": 2 00:07:34.065 } 00:07:34.065 ], 00:07:34.065 "driver_specific": {} 00:07:34.065 } 00:07:34.065 ] 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:34.065 16:08:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.065 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.065 "name": "Existed_Raid", 00:07:34.065 "uuid": "0105b03f-6947-4a16-af8c-f786728903b7", 00:07:34.065 "strip_size_kb": 64, 00:07:34.065 "state": "online", 00:07:34.065 "raid_level": "raid0", 00:07:34.065 "superblock": false, 00:07:34.065 "num_base_bdevs": 2, 00:07:34.065 "num_base_bdevs_discovered": 2, 00:07:34.065 
"num_base_bdevs_operational": 2, 00:07:34.065 "base_bdevs_list": [ 00:07:34.065 { 00:07:34.065 "name": "BaseBdev1", 00:07:34.065 "uuid": "5e9eea6c-0908-4104-aa32-ab7864df3326", 00:07:34.065 "is_configured": true, 00:07:34.065 "data_offset": 0, 00:07:34.065 "data_size": 65536 00:07:34.065 }, 00:07:34.065 { 00:07:34.065 "name": "BaseBdev2", 00:07:34.065 "uuid": "9a7237ef-0185-4dc6-bc90-f5d5f14ea34b", 00:07:34.065 "is_configured": true, 00:07:34.066 "data_offset": 0, 00:07:34.066 "data_size": 65536 00:07:34.066 } 00:07:34.066 ] 00:07:34.066 }' 00:07:34.066 16:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.066 16:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.635 [2024-09-28 16:08:49.110287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.635 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.635 "name": "Existed_Raid", 00:07:34.635 "aliases": [ 00:07:34.635 "0105b03f-6947-4a16-af8c-f786728903b7" 00:07:34.635 ], 00:07:34.635 "product_name": "Raid Volume", 00:07:34.635 "block_size": 512, 00:07:34.635 "num_blocks": 131072, 00:07:34.635 "uuid": "0105b03f-6947-4a16-af8c-f786728903b7", 00:07:34.635 "assigned_rate_limits": { 00:07:34.635 "rw_ios_per_sec": 0, 00:07:34.635 "rw_mbytes_per_sec": 0, 00:07:34.635 "r_mbytes_per_sec": 0, 00:07:34.635 "w_mbytes_per_sec": 0 00:07:34.635 }, 00:07:34.635 "claimed": false, 00:07:34.635 "zoned": false, 00:07:34.635 "supported_io_types": { 00:07:34.635 "read": true, 00:07:34.635 "write": true, 00:07:34.635 "unmap": true, 00:07:34.635 "flush": true, 00:07:34.635 "reset": true, 00:07:34.635 "nvme_admin": false, 00:07:34.635 "nvme_io": false, 00:07:34.635 "nvme_io_md": false, 00:07:34.635 "write_zeroes": true, 00:07:34.635 "zcopy": false, 00:07:34.635 "get_zone_info": false, 00:07:34.635 "zone_management": false, 00:07:34.635 "zone_append": false, 00:07:34.635 "compare": false, 00:07:34.635 "compare_and_write": false, 00:07:34.635 "abort": false, 00:07:34.635 "seek_hole": false, 00:07:34.635 "seek_data": false, 00:07:34.635 "copy": false, 00:07:34.635 "nvme_iov_md": false 00:07:34.635 }, 00:07:34.635 "memory_domains": [ 00:07:34.635 { 00:07:34.635 "dma_device_id": "system", 00:07:34.635 "dma_device_type": 1 00:07:34.635 }, 00:07:34.635 { 00:07:34.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.635 "dma_device_type": 2 00:07:34.635 }, 00:07:34.635 { 00:07:34.635 "dma_device_id": "system", 00:07:34.635 "dma_device_type": 1 00:07:34.635 }, 00:07:34.635 { 00:07:34.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.635 "dma_device_type": 2 00:07:34.635 } 00:07:34.636 ], 00:07:34.636 "driver_specific": { 
00:07:34.636 "raid": { 00:07:34.636 "uuid": "0105b03f-6947-4a16-af8c-f786728903b7", 00:07:34.636 "strip_size_kb": 64, 00:07:34.636 "state": "online", 00:07:34.636 "raid_level": "raid0", 00:07:34.636 "superblock": false, 00:07:34.636 "num_base_bdevs": 2, 00:07:34.636 "num_base_bdevs_discovered": 2, 00:07:34.636 "num_base_bdevs_operational": 2, 00:07:34.636 "base_bdevs_list": [ 00:07:34.636 { 00:07:34.636 "name": "BaseBdev1", 00:07:34.636 "uuid": "5e9eea6c-0908-4104-aa32-ab7864df3326", 00:07:34.636 "is_configured": true, 00:07:34.636 "data_offset": 0, 00:07:34.636 "data_size": 65536 00:07:34.636 }, 00:07:34.636 { 00:07:34.636 "name": "BaseBdev2", 00:07:34.636 "uuid": "9a7237ef-0185-4dc6-bc90-f5d5f14ea34b", 00:07:34.636 "is_configured": true, 00:07:34.636 "data_offset": 0, 00:07:34.636 "data_size": 65536 00:07:34.636 } 00:07:34.636 ] 00:07:34.636 } 00:07:34.636 } 00:07:34.636 }' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.636 BaseBdev2' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.636 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.636 [2024-09-28 16:08:49.309728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.636 [2024-09-28 16:08:49.309759] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.636 [2024-09-28 16:08:49.309811] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.895 16:08:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.895 "name": "Existed_Raid", 00:07:34.895 "uuid": "0105b03f-6947-4a16-af8c-f786728903b7", 00:07:34.895 "strip_size_kb": 64, 00:07:34.895 "state": "offline", 00:07:34.895 "raid_level": "raid0", 00:07:34.895 "superblock": false, 00:07:34.895 "num_base_bdevs": 2, 00:07:34.895 "num_base_bdevs_discovered": 1, 00:07:34.895 "num_base_bdevs_operational": 1, 00:07:34.895 "base_bdevs_list": [ 00:07:34.895 { 00:07:34.895 "name": null, 00:07:34.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.895 "is_configured": false, 00:07:34.895 "data_offset": 0, 00:07:34.895 "data_size": 65536 00:07:34.895 }, 00:07:34.895 { 00:07:34.895 "name": "BaseBdev2", 00:07:34.895 "uuid": "9a7237ef-0185-4dc6-bc90-f5d5f14ea34b", 00:07:34.895 "is_configured": true, 00:07:34.895 "data_offset": 0, 00:07:34.895 "data_size": 65536 00:07:34.895 } 00:07:34.895 ] 00:07:34.895 }' 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.895 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.155 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:35.155 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.155 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.155 16:08:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.155 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.155 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.415 [2024-09-28 16:08:49.880781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.415 [2024-09-28 16:08:49.880908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.415 16:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.415 16:08:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60688 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60688 ']' 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60688 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60688 00:07:35.415 killing process with pid 60688 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60688' 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60688 00:07:35.415 [2024-09-28 16:08:50.077468] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.415 16:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60688 00:07:35.415 [2024-09-28 16:08:50.095445] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.797 ************************************ 00:07:36.797 END TEST raid_state_function_test 
00:07:36.797 16:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.797 00:07:36.797 real 0m5.108s 00:07:36.797 user 0m7.123s 00:07:36.797 sys 0m0.863s 00:07:36.797 16:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.797 16:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.797 ************************************ 00:07:36.797 16:08:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:36.797 16:08:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:36.797 16:08:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.797 16:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.057 ************************************ 00:07:37.057 START TEST raid_state_function_test_sb 00:07:37.057 ************************************ 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60936 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.057 16:08:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60936' 00:07:37.057 Process raid pid: 60936 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60936 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60936 ']' 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.057 16:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.057 [2024-09-28 16:08:51.586435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:37.057 [2024-09-28 16:08:51.586543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.316 [2024-09-28 16:08:51.751514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.316 [2024-09-28 16:08:51.991779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.575 [2024-09-28 16:08:52.222111] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.575 [2024-09-28 16:08:52.222155] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.839 [2024-09-28 16:08:52.412209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.839 [2024-09-28 16:08:52.412274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.839 [2024-09-28 16:08:52.412284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.839 [2024-09-28 16:08:52.412294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.839 
16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.839 "name": "Existed_Raid", 00:07:37.839 "uuid": "7ef7d16b-6cac-4881-ae80-2ba627b5a5ec", 00:07:37.839 "strip_size_kb": 
64, 00:07:37.839 "state": "configuring", 00:07:37.839 "raid_level": "raid0", 00:07:37.839 "superblock": true, 00:07:37.839 "num_base_bdevs": 2, 00:07:37.839 "num_base_bdevs_discovered": 0, 00:07:37.839 "num_base_bdevs_operational": 2, 00:07:37.839 "base_bdevs_list": [ 00:07:37.839 { 00:07:37.839 "name": "BaseBdev1", 00:07:37.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.839 "is_configured": false, 00:07:37.839 "data_offset": 0, 00:07:37.839 "data_size": 0 00:07:37.839 }, 00:07:37.839 { 00:07:37.839 "name": "BaseBdev2", 00:07:37.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.839 "is_configured": false, 00:07:37.839 "data_offset": 0, 00:07:37.839 "data_size": 0 00:07:37.839 } 00:07:37.839 ] 00:07:37.839 }' 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.839 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.433 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.433 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.433 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.433 [2024-09-28 16:08:52.899250] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.433 [2024-09-28 16:08:52.899290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:38.433 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.433 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.433 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.433 16:08:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.433 [2024-09-28 16:08:52.907267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.433 [2024-09-28 16:08:52.907302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.433 [2024-09-28 16:08:52.907311] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.434 [2024-09-28 16:08:52.907323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.434 [2024-09-28 16:08:52.971381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.434 BaseBdev1 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.434 16:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.434 [ 00:07:38.434 { 00:07:38.434 "name": "BaseBdev1", 00:07:38.434 "aliases": [ 00:07:38.434 "8e0e40bd-4477-46d9-9740-843b5ef3139d" 00:07:38.434 ], 00:07:38.434 "product_name": "Malloc disk", 00:07:38.434 "block_size": 512, 00:07:38.434 "num_blocks": 65536, 00:07:38.434 "uuid": "8e0e40bd-4477-46d9-9740-843b5ef3139d", 00:07:38.434 "assigned_rate_limits": { 00:07:38.434 "rw_ios_per_sec": 0, 00:07:38.434 "rw_mbytes_per_sec": 0, 00:07:38.434 "r_mbytes_per_sec": 0, 00:07:38.434 "w_mbytes_per_sec": 0 00:07:38.434 }, 00:07:38.434 "claimed": true, 00:07:38.434 "claim_type": "exclusive_write", 00:07:38.434 "zoned": false, 00:07:38.434 "supported_io_types": { 00:07:38.434 "read": true, 00:07:38.434 "write": true, 00:07:38.434 "unmap": true, 00:07:38.434 "flush": true, 00:07:38.434 "reset": true, 00:07:38.434 "nvme_admin": false, 00:07:38.434 "nvme_io": false, 00:07:38.434 "nvme_io_md": false, 00:07:38.434 "write_zeroes": true, 00:07:38.434 "zcopy": true, 00:07:38.434 "get_zone_info": false, 00:07:38.434 "zone_management": false, 00:07:38.434 "zone_append": false, 00:07:38.434 "compare": false, 00:07:38.434 "compare_and_write": false, 00:07:38.434 
"abort": true, 00:07:38.434 "seek_hole": false, 00:07:38.434 "seek_data": false, 00:07:38.434 "copy": true, 00:07:38.434 "nvme_iov_md": false 00:07:38.434 }, 00:07:38.434 "memory_domains": [ 00:07:38.434 { 00:07:38.434 "dma_device_id": "system", 00:07:38.434 "dma_device_type": 1 00:07:38.434 }, 00:07:38.434 { 00:07:38.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.434 "dma_device_type": 2 00:07:38.434 } 00:07:38.434 ], 00:07:38.434 "driver_specific": {} 00:07:38.434 } 00:07:38.434 ] 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.434 "name": "Existed_Raid", 00:07:38.434 "uuid": "d929e893-cc37-46d8-b637-793ce0c00f1d", 00:07:38.434 "strip_size_kb": 64, 00:07:38.434 "state": "configuring", 00:07:38.434 "raid_level": "raid0", 00:07:38.434 "superblock": true, 00:07:38.434 "num_base_bdevs": 2, 00:07:38.434 "num_base_bdevs_discovered": 1, 00:07:38.434 "num_base_bdevs_operational": 2, 00:07:38.434 "base_bdevs_list": [ 00:07:38.434 { 00:07:38.434 "name": "BaseBdev1", 00:07:38.434 "uuid": "8e0e40bd-4477-46d9-9740-843b5ef3139d", 00:07:38.434 "is_configured": true, 00:07:38.434 "data_offset": 2048, 00:07:38.434 "data_size": 63488 00:07:38.434 }, 00:07:38.434 { 00:07:38.434 "name": "BaseBdev2", 00:07:38.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.434 "is_configured": false, 00:07:38.434 "data_offset": 0, 00:07:38.434 "data_size": 0 00:07:38.434 } 00:07:38.434 ] 00:07:38.434 }' 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.434 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:39.005 [2024-09-28 16:08:53.474611] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.005 [2024-09-28 16:08:53.474656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.005 [2024-09-28 16:08:53.486632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.005 [2024-09-28 16:08:53.488660] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.005 [2024-09-28 16:08:53.488699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.005 "name": "Existed_Raid", 00:07:39.005 "uuid": "6faf6289-c3c5-42da-b675-d8e819308ca7", 00:07:39.005 "strip_size_kb": 64, 00:07:39.005 "state": "configuring", 00:07:39.005 "raid_level": "raid0", 00:07:39.005 "superblock": true, 00:07:39.005 "num_base_bdevs": 2, 00:07:39.005 "num_base_bdevs_discovered": 1, 00:07:39.005 "num_base_bdevs_operational": 2, 00:07:39.005 "base_bdevs_list": [ 00:07:39.005 { 00:07:39.005 "name": "BaseBdev1", 00:07:39.005 "uuid": "8e0e40bd-4477-46d9-9740-843b5ef3139d", 00:07:39.005 "is_configured": true, 00:07:39.005 "data_offset": 2048, 
00:07:39.005 "data_size": 63488 00:07:39.005 }, 00:07:39.005 { 00:07:39.005 "name": "BaseBdev2", 00:07:39.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.005 "is_configured": false, 00:07:39.005 "data_offset": 0, 00:07:39.005 "data_size": 0 00:07:39.005 } 00:07:39.005 ] 00:07:39.005 }' 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.005 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.265 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.265 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.265 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.525 [2024-09-28 16:08:53.964922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.525 [2024-09-28 16:08:53.965205] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.525 [2024-09-28 16:08:53.965257] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.525 BaseBdev2 00:07:39.525 [2024-09-28 16:08:53.965558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:39.525 [2024-09-28 16:08:53.965720] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.525 [2024-09-28 16:08:53.965734] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:39.525 [2024-09-28 16:08:53.965883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:39.525 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:39.526 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.526 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.526 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.526 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.526 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.526 16:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.526 [ 00:07:39.526 { 00:07:39.526 "name": "BaseBdev2", 00:07:39.526 "aliases": [ 00:07:39.526 "4d93bf5b-2c3d-4103-90a1-1b2789fc0871" 00:07:39.526 ], 00:07:39.526 "product_name": "Malloc disk", 00:07:39.526 "block_size": 512, 00:07:39.526 "num_blocks": 65536, 00:07:39.526 "uuid": "4d93bf5b-2c3d-4103-90a1-1b2789fc0871", 00:07:39.526 "assigned_rate_limits": { 00:07:39.526 "rw_ios_per_sec": 0, 00:07:39.526 "rw_mbytes_per_sec": 0, 00:07:39.526 "r_mbytes_per_sec": 0, 00:07:39.526 "w_mbytes_per_sec": 0 00:07:39.526 }, 00:07:39.526 "claimed": true, 00:07:39.526 "claim_type": 
"exclusive_write", 00:07:39.526 "zoned": false, 00:07:39.526 "supported_io_types": { 00:07:39.526 "read": true, 00:07:39.526 "write": true, 00:07:39.526 "unmap": true, 00:07:39.526 "flush": true, 00:07:39.526 "reset": true, 00:07:39.526 "nvme_admin": false, 00:07:39.526 "nvme_io": false, 00:07:39.526 "nvme_io_md": false, 00:07:39.526 "write_zeroes": true, 00:07:39.526 "zcopy": true, 00:07:39.526 "get_zone_info": false, 00:07:39.526 "zone_management": false, 00:07:39.526 "zone_append": false, 00:07:39.526 "compare": false, 00:07:39.526 "compare_and_write": false, 00:07:39.526 "abort": true, 00:07:39.526 "seek_hole": false, 00:07:39.526 "seek_data": false, 00:07:39.526 "copy": true, 00:07:39.526 "nvme_iov_md": false 00:07:39.526 }, 00:07:39.526 "memory_domains": [ 00:07:39.526 { 00:07:39.526 "dma_device_id": "system", 00:07:39.526 "dma_device_type": 1 00:07:39.526 }, 00:07:39.526 { 00:07:39.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.526 "dma_device_type": 2 00:07:39.526 } 00:07:39.526 ], 00:07:39.526 "driver_specific": {} 00:07:39.526 } 00:07:39.526 ] 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.526 "name": "Existed_Raid", 00:07:39.526 "uuid": "6faf6289-c3c5-42da-b675-d8e819308ca7", 00:07:39.526 "strip_size_kb": 64, 00:07:39.526 "state": "online", 00:07:39.526 "raid_level": "raid0", 00:07:39.526 "superblock": true, 00:07:39.526 "num_base_bdevs": 2, 00:07:39.526 "num_base_bdevs_discovered": 2, 00:07:39.526 "num_base_bdevs_operational": 2, 00:07:39.526 "base_bdevs_list": [ 00:07:39.526 { 00:07:39.526 "name": "BaseBdev1", 00:07:39.526 "uuid": "8e0e40bd-4477-46d9-9740-843b5ef3139d", 00:07:39.526 "is_configured": true, 00:07:39.526 "data_offset": 2048, 00:07:39.526 "data_size": 63488 
00:07:39.526 }, 00:07:39.526 { 00:07:39.526 "name": "BaseBdev2", 00:07:39.526 "uuid": "4d93bf5b-2c3d-4103-90a1-1b2789fc0871", 00:07:39.526 "is_configured": true, 00:07:39.526 "data_offset": 2048, 00:07:39.526 "data_size": 63488 00:07:39.526 } 00:07:39.526 ] 00:07:39.526 }' 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.526 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.786 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.786 [2024-09-28 16:08:54.460448] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.049 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.049 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.049 "name": 
"Existed_Raid", 00:07:40.049 "aliases": [ 00:07:40.049 "6faf6289-c3c5-42da-b675-d8e819308ca7" 00:07:40.049 ], 00:07:40.049 "product_name": "Raid Volume", 00:07:40.049 "block_size": 512, 00:07:40.049 "num_blocks": 126976, 00:07:40.049 "uuid": "6faf6289-c3c5-42da-b675-d8e819308ca7", 00:07:40.049 "assigned_rate_limits": { 00:07:40.049 "rw_ios_per_sec": 0, 00:07:40.049 "rw_mbytes_per_sec": 0, 00:07:40.049 "r_mbytes_per_sec": 0, 00:07:40.049 "w_mbytes_per_sec": 0 00:07:40.049 }, 00:07:40.049 "claimed": false, 00:07:40.049 "zoned": false, 00:07:40.049 "supported_io_types": { 00:07:40.049 "read": true, 00:07:40.049 "write": true, 00:07:40.049 "unmap": true, 00:07:40.049 "flush": true, 00:07:40.049 "reset": true, 00:07:40.049 "nvme_admin": false, 00:07:40.049 "nvme_io": false, 00:07:40.049 "nvme_io_md": false, 00:07:40.049 "write_zeroes": true, 00:07:40.049 "zcopy": false, 00:07:40.049 "get_zone_info": false, 00:07:40.049 "zone_management": false, 00:07:40.049 "zone_append": false, 00:07:40.049 "compare": false, 00:07:40.049 "compare_and_write": false, 00:07:40.049 "abort": false, 00:07:40.049 "seek_hole": false, 00:07:40.049 "seek_data": false, 00:07:40.049 "copy": false, 00:07:40.049 "nvme_iov_md": false 00:07:40.049 }, 00:07:40.049 "memory_domains": [ 00:07:40.049 { 00:07:40.049 "dma_device_id": "system", 00:07:40.049 "dma_device_type": 1 00:07:40.049 }, 00:07:40.049 { 00:07:40.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.049 "dma_device_type": 2 00:07:40.049 }, 00:07:40.049 { 00:07:40.049 "dma_device_id": "system", 00:07:40.049 "dma_device_type": 1 00:07:40.050 }, 00:07:40.050 { 00:07:40.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.050 "dma_device_type": 2 00:07:40.050 } 00:07:40.050 ], 00:07:40.050 "driver_specific": { 00:07:40.050 "raid": { 00:07:40.050 "uuid": "6faf6289-c3c5-42da-b675-d8e819308ca7", 00:07:40.050 "strip_size_kb": 64, 00:07:40.050 "state": "online", 00:07:40.050 "raid_level": "raid0", 00:07:40.050 "superblock": true, 00:07:40.050 
"num_base_bdevs": 2, 00:07:40.050 "num_base_bdevs_discovered": 2, 00:07:40.050 "num_base_bdevs_operational": 2, 00:07:40.050 "base_bdevs_list": [ 00:07:40.050 { 00:07:40.050 "name": "BaseBdev1", 00:07:40.050 "uuid": "8e0e40bd-4477-46d9-9740-843b5ef3139d", 00:07:40.050 "is_configured": true, 00:07:40.050 "data_offset": 2048, 00:07:40.050 "data_size": 63488 00:07:40.050 }, 00:07:40.050 { 00:07:40.050 "name": "BaseBdev2", 00:07:40.050 "uuid": "4d93bf5b-2c3d-4103-90a1-1b2789fc0871", 00:07:40.050 "is_configured": true, 00:07:40.050 "data_offset": 2048, 00:07:40.050 "data_size": 63488 00:07:40.050 } 00:07:40.050 ] 00:07:40.050 } 00:07:40.050 } 00:07:40.050 }' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:40.050 BaseBdev2' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.050 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.050 [2024-09-28 16:08:54.647913] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:40.050 [2024-09-28 16:08:54.647943] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.050 [2024-09-28 16:08:54.647991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.310 16:08:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.310 "name": "Existed_Raid", 00:07:40.310 "uuid": "6faf6289-c3c5-42da-b675-d8e819308ca7", 00:07:40.310 "strip_size_kb": 64, 00:07:40.310 "state": "offline", 00:07:40.310 "raid_level": "raid0", 00:07:40.310 "superblock": true, 00:07:40.310 "num_base_bdevs": 2, 00:07:40.310 "num_base_bdevs_discovered": 1, 00:07:40.310 "num_base_bdevs_operational": 1, 00:07:40.310 "base_bdevs_list": [ 00:07:40.310 { 00:07:40.310 "name": null, 00:07:40.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.310 "is_configured": false, 00:07:40.310 "data_offset": 0, 00:07:40.310 "data_size": 63488 00:07:40.310 }, 00:07:40.310 { 00:07:40.310 "name": "BaseBdev2", 00:07:40.310 "uuid": "4d93bf5b-2c3d-4103-90a1-1b2789fc0871", 00:07:40.310 "is_configured": true, 00:07:40.310 "data_offset": 2048, 00:07:40.310 "data_size": 63488 00:07:40.310 } 00:07:40.310 ] 00:07:40.310 }' 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.310 16:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.570 16:08:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.570 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.570 [2024-09-28 16:08:55.196746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.570 [2024-09-28 16:08:55.196807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60936 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60936 ']' 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60936 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60936 00:07:40.829 killing process with pid 60936 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60936' 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60936 00:07:40.829 [2024-09-28 16:08:55.387019] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.829 16:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60936 00:07:40.829 [2024-09-28 16:08:55.404975] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.216 16:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:42.216 00:07:42.216 real 0m5.235s 00:07:42.216 user 0m7.327s 00:07:42.216 sys 0m0.900s 00:07:42.216 16:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.216 ************************************ 00:07:42.216 END TEST raid_state_function_test_sb 00:07:42.216 16:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.216 ************************************ 00:07:42.216 16:08:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:42.216 16:08:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:42.216 16:08:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.216 16:08:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.216 ************************************ 00:07:42.216 START TEST raid_superblock_test 00:07:42.216 ************************************ 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61188 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61188 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61188 ']' 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.216 16:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.216 [2024-09-28 16:08:56.885436] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:42.216 [2024-09-28 16:08:56.885548] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61188 ] 00:07:42.476 [2024-09-28 16:08:57.044765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.736 [2024-09-28 16:08:57.282183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.996 [2024-09-28 16:08:57.510525] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.996 [2024-09-28 16:08:57.510567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.257 16:08:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 malloc1 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 [2024-09-28 16:08:57.764630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:43.257 [2024-09-28 16:08:57.764713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.257 [2024-09-28 16:08:57.764739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:43.257 [2024-09-28 16:08:57.764752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.257 [2024-09-28 16:08:57.767099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.257 [2024-09-28 16:08:57.767138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:43.257 pt1 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:43.257 16:08:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 malloc2 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 [2024-09-28 16:08:57.853587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:43.257 [2024-09-28 16:08:57.853659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.257 [2024-09-28 16:08:57.853682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:43.257 
[2024-09-28 16:08:57.853692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.257 [2024-09-28 16:08:57.855900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.257 [2024-09-28 16:08:57.855939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:43.257 pt2 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 [2024-09-28 16:08:57.865639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:43.257 [2024-09-28 16:08:57.867582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:43.257 [2024-09-28 16:08:57.867765] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:43.257 [2024-09-28 16:08:57.867785] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.257 [2024-09-28 16:08:57.868024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:43.257 [2024-09-28 16:08:57.868190] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:43.257 [2024-09-28 16:08:57.868209] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:43.257 [2024-09-28 16:08:57.868365] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.257 "name": "raid_bdev1", 00:07:43.257 "uuid": 
"896ce436-09de-4f20-9af5-11fdb853cef1", 00:07:43.257 "strip_size_kb": 64, 00:07:43.257 "state": "online", 00:07:43.257 "raid_level": "raid0", 00:07:43.257 "superblock": true, 00:07:43.257 "num_base_bdevs": 2, 00:07:43.257 "num_base_bdevs_discovered": 2, 00:07:43.257 "num_base_bdevs_operational": 2, 00:07:43.257 "base_bdevs_list": [ 00:07:43.257 { 00:07:43.257 "name": "pt1", 00:07:43.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.257 "is_configured": true, 00:07:43.257 "data_offset": 2048, 00:07:43.257 "data_size": 63488 00:07:43.257 }, 00:07:43.257 { 00:07:43.257 "name": "pt2", 00:07:43.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.257 "is_configured": true, 00:07:43.257 "data_offset": 2048, 00:07:43.257 "data_size": 63488 00:07:43.257 } 00:07:43.257 ] 00:07:43.257 }' 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.257 16:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.826 16:08:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.826 [2024-09-28 16:08:58.325031] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.826 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.826 "name": "raid_bdev1", 00:07:43.826 "aliases": [ 00:07:43.826 "896ce436-09de-4f20-9af5-11fdb853cef1" 00:07:43.826 ], 00:07:43.826 "product_name": "Raid Volume", 00:07:43.826 "block_size": 512, 00:07:43.826 "num_blocks": 126976, 00:07:43.826 "uuid": "896ce436-09de-4f20-9af5-11fdb853cef1", 00:07:43.826 "assigned_rate_limits": { 00:07:43.826 "rw_ios_per_sec": 0, 00:07:43.826 "rw_mbytes_per_sec": 0, 00:07:43.826 "r_mbytes_per_sec": 0, 00:07:43.826 "w_mbytes_per_sec": 0 00:07:43.826 }, 00:07:43.826 "claimed": false, 00:07:43.826 "zoned": false, 00:07:43.826 "supported_io_types": { 00:07:43.826 "read": true, 00:07:43.826 "write": true, 00:07:43.826 "unmap": true, 00:07:43.826 "flush": true, 00:07:43.826 "reset": true, 00:07:43.826 "nvme_admin": false, 00:07:43.826 "nvme_io": false, 00:07:43.826 "nvme_io_md": false, 00:07:43.826 "write_zeroes": true, 00:07:43.826 "zcopy": false, 00:07:43.826 "get_zone_info": false, 00:07:43.826 "zone_management": false, 00:07:43.826 "zone_append": false, 00:07:43.826 "compare": false, 00:07:43.826 "compare_and_write": false, 00:07:43.826 "abort": false, 00:07:43.826 "seek_hole": false, 00:07:43.826 "seek_data": false, 00:07:43.826 "copy": false, 00:07:43.826 "nvme_iov_md": false 00:07:43.826 }, 00:07:43.826 "memory_domains": [ 00:07:43.826 { 00:07:43.826 "dma_device_id": "system", 00:07:43.826 "dma_device_type": 1 00:07:43.826 }, 00:07:43.826 { 00:07:43.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.826 "dma_device_type": 2 00:07:43.826 }, 00:07:43.826 { 00:07:43.826 "dma_device_id": "system", 00:07:43.826 "dma_device_type": 
1 00:07:43.826 }, 00:07:43.826 { 00:07:43.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.826 "dma_device_type": 2 00:07:43.826 } 00:07:43.826 ], 00:07:43.826 "driver_specific": { 00:07:43.826 "raid": { 00:07:43.826 "uuid": "896ce436-09de-4f20-9af5-11fdb853cef1", 00:07:43.826 "strip_size_kb": 64, 00:07:43.826 "state": "online", 00:07:43.826 "raid_level": "raid0", 00:07:43.826 "superblock": true, 00:07:43.826 "num_base_bdevs": 2, 00:07:43.826 "num_base_bdevs_discovered": 2, 00:07:43.826 "num_base_bdevs_operational": 2, 00:07:43.826 "base_bdevs_list": [ 00:07:43.826 { 00:07:43.826 "name": "pt1", 00:07:43.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.826 "is_configured": true, 00:07:43.827 "data_offset": 2048, 00:07:43.827 "data_size": 63488 00:07:43.827 }, 00:07:43.827 { 00:07:43.827 "name": "pt2", 00:07:43.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.827 "is_configured": true, 00:07:43.827 "data_offset": 2048, 00:07:43.827 "data_size": 63488 00:07:43.827 } 00:07:43.827 ] 00:07:43.827 } 00:07:43.827 } 00:07:43.827 }' 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:43.827 pt2' 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.827 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.087 [2024-09-28 16:08:58.560570] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.087 16:08:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=896ce436-09de-4f20-9af5-11fdb853cef1 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 896ce436-09de-4f20-9af5-11fdb853cef1 ']' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.087 [2024-09-28 16:08:58.608297] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.087 [2024-09-28 16:08:58.608324] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.087 [2024-09-28 16:08:58.608413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.087 [2024-09-28 16:08:58.608458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.087 [2024-09-28 16:08:58.608471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.087 16:08:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.087 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.087 [2024-09-28 16:08:58.744049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:44.087 [2024-09-28 16:08:58.746154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:44.087 [2024-09-28 16:08:58.746234] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:44.087 [2024-09-28 16:08:58.746289] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:44.087 [2024-09-28 16:08:58.746305] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.087 [2024-09-28 16:08:58.746314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:44.087 request: 00:07:44.087 { 00:07:44.088 "name": "raid_bdev1", 00:07:44.088 "raid_level": "raid0", 00:07:44.088 "base_bdevs": [ 00:07:44.088 "malloc1", 00:07:44.088 "malloc2" 00:07:44.088 ], 00:07:44.088 "strip_size_kb": 64, 00:07:44.088 "superblock": false, 00:07:44.088 "method": "bdev_raid_create", 00:07:44.088 "req_id": 1 00:07:44.088 } 00:07:44.088 Got JSON-RPC error response 00:07:44.088 response: 00:07:44.088 { 00:07:44.088 "code": -17, 00:07:44.088 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:44.088 } 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.088 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.348 [2024-09-28 16:08:58.787951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.348 [2024-09-28 16:08:58.787999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.348 [2024-09-28 16:08:58.788016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:44.348 [2024-09-28 16:08:58.788028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.348 [2024-09-28 16:08:58.790389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.348 [2024-09-28 16:08:58.790425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.348 [2024-09-28 16:08:58.790492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:44.348 [2024-09-28 16:08:58.790544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.348 pt1 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.348 "name": "raid_bdev1", 00:07:44.348 "uuid": "896ce436-09de-4f20-9af5-11fdb853cef1", 00:07:44.348 "strip_size_kb": 64, 00:07:44.348 "state": "configuring", 00:07:44.348 "raid_level": "raid0", 00:07:44.348 "superblock": true, 00:07:44.348 "num_base_bdevs": 2, 00:07:44.348 "num_base_bdevs_discovered": 1, 00:07:44.348 "num_base_bdevs_operational": 2, 00:07:44.348 "base_bdevs_list": [ 00:07:44.348 { 00:07:44.348 "name": "pt1", 00:07:44.348 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.348 "is_configured": true, 00:07:44.348 "data_offset": 2048, 00:07:44.348 "data_size": 63488 00:07:44.348 }, 00:07:44.348 { 00:07:44.348 "name": null, 00:07:44.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.348 "is_configured": false, 00:07:44.348 "data_offset": 2048, 00:07:44.348 "data_size": 63488 00:07:44.348 } 00:07:44.348 ] 00:07:44.348 }' 00:07:44.348 16:08:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.348 16:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.608 [2024-09-28 16:08:59.191285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.608 [2024-09-28 16:08:59.191366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.608 [2024-09-28 16:08:59.191387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:44.608 [2024-09-28 16:08:59.191398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.608 [2024-09-28 16:08:59.191874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.608 [2024-09-28 16:08:59.191903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.608 [2024-09-28 16:08:59.191980] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:44.608 [2024-09-28 16:08:59.192006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.608 [2024-09-28 16:08:59.192120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.608 [2024-09-28 16:08:59.192136] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.608 [2024-09-28 16:08:59.192396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:44.608 [2024-09-28 16:08:59.192562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.608 [2024-09-28 16:08:59.192575] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:44.608 [2024-09-28 16:08:59.192706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.608 pt2 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.608 "name": "raid_bdev1", 00:07:44.608 "uuid": "896ce436-09de-4f20-9af5-11fdb853cef1", 00:07:44.608 "strip_size_kb": 64, 00:07:44.608 "state": "online", 00:07:44.608 "raid_level": "raid0", 00:07:44.608 "superblock": true, 00:07:44.608 "num_base_bdevs": 2, 00:07:44.608 "num_base_bdevs_discovered": 2, 00:07:44.608 "num_base_bdevs_operational": 2, 00:07:44.608 "base_bdevs_list": [ 00:07:44.608 { 00:07:44.608 "name": "pt1", 00:07:44.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.608 "is_configured": true, 00:07:44.608 "data_offset": 2048, 00:07:44.608 "data_size": 63488 00:07:44.608 }, 00:07:44.608 { 00:07:44.608 "name": "pt2", 00:07:44.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.608 "is_configured": true, 00:07:44.608 "data_offset": 2048, 00:07:44.608 "data_size": 63488 00:07:44.608 } 00:07:44.608 ] 00:07:44.608 }' 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.608 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:45.178 
16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.178 [2024-09-28 16:08:59.646738] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.178 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.178 "name": "raid_bdev1", 00:07:45.178 "aliases": [ 00:07:45.178 "896ce436-09de-4f20-9af5-11fdb853cef1" 00:07:45.178 ], 00:07:45.178 "product_name": "Raid Volume", 00:07:45.178 "block_size": 512, 00:07:45.178 "num_blocks": 126976, 00:07:45.178 "uuid": "896ce436-09de-4f20-9af5-11fdb853cef1", 00:07:45.178 "assigned_rate_limits": { 00:07:45.178 "rw_ios_per_sec": 0, 00:07:45.178 "rw_mbytes_per_sec": 0, 00:07:45.178 "r_mbytes_per_sec": 0, 00:07:45.178 "w_mbytes_per_sec": 0 00:07:45.178 }, 00:07:45.178 "claimed": false, 00:07:45.178 "zoned": false, 00:07:45.178 "supported_io_types": { 00:07:45.178 "read": true, 00:07:45.178 "write": true, 00:07:45.178 "unmap": true, 00:07:45.178 "flush": true, 00:07:45.178 "reset": true, 00:07:45.178 "nvme_admin": false, 00:07:45.178 "nvme_io": false, 00:07:45.178 "nvme_io_md": false, 00:07:45.178 
"write_zeroes": true, 00:07:45.178 "zcopy": false, 00:07:45.178 "get_zone_info": false, 00:07:45.178 "zone_management": false, 00:07:45.178 "zone_append": false, 00:07:45.179 "compare": false, 00:07:45.179 "compare_and_write": false, 00:07:45.179 "abort": false, 00:07:45.179 "seek_hole": false, 00:07:45.179 "seek_data": false, 00:07:45.179 "copy": false, 00:07:45.179 "nvme_iov_md": false 00:07:45.179 }, 00:07:45.179 "memory_domains": [ 00:07:45.179 { 00:07:45.179 "dma_device_id": "system", 00:07:45.179 "dma_device_type": 1 00:07:45.179 }, 00:07:45.179 { 00:07:45.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.179 "dma_device_type": 2 00:07:45.179 }, 00:07:45.179 { 00:07:45.179 "dma_device_id": "system", 00:07:45.179 "dma_device_type": 1 00:07:45.179 }, 00:07:45.179 { 00:07:45.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.179 "dma_device_type": 2 00:07:45.179 } 00:07:45.179 ], 00:07:45.179 "driver_specific": { 00:07:45.179 "raid": { 00:07:45.179 "uuid": "896ce436-09de-4f20-9af5-11fdb853cef1", 00:07:45.179 "strip_size_kb": 64, 00:07:45.179 "state": "online", 00:07:45.179 "raid_level": "raid0", 00:07:45.179 "superblock": true, 00:07:45.179 "num_base_bdevs": 2, 00:07:45.179 "num_base_bdevs_discovered": 2, 00:07:45.179 "num_base_bdevs_operational": 2, 00:07:45.179 "base_bdevs_list": [ 00:07:45.179 { 00:07:45.179 "name": "pt1", 00:07:45.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.179 "is_configured": true, 00:07:45.179 "data_offset": 2048, 00:07:45.179 "data_size": 63488 00:07:45.179 }, 00:07:45.179 { 00:07:45.179 "name": "pt2", 00:07:45.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.179 "is_configured": true, 00:07:45.179 "data_offset": 2048, 00:07:45.179 "data_size": 63488 00:07:45.179 } 00:07:45.179 ] 00:07:45.179 } 00:07:45.179 } 00:07:45.179 }' 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:45.179 pt2' 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.179 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.438 16:08:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:45.438 [2024-09-28 16:08:59.890321] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 896ce436-09de-4f20-9af5-11fdb853cef1 '!=' 896ce436-09de-4f20-9af5-11fdb853cef1 ']' 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61188 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61188 ']' 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61188 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61188 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.438 killing process with pid 61188 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61188' 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61188 00:07:45.438 [2024-09-28 16:08:59.968980] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.438 [2024-09-28 16:08:59.969053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.438 [2024-09-28 16:08:59.969094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.438 [2024-09-28 16:08:59.969105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:45.438 16:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61188 00:07:45.698 [2024-09-28 16:09:00.182132] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.080 16:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:47.080 00:07:47.080 real 0m4.681s 00:07:47.080 user 0m6.357s 00:07:47.080 sys 0m0.847s 00:07:47.080 16:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.080 16:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.080 ************************************ 00:07:47.080 END TEST raid_superblock_test 00:07:47.080 ************************************ 00:07:47.080 16:09:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:47.080 16:09:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:47.080 16:09:01 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:47.080 16:09:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.080 ************************************ 00:07:47.080 START TEST raid_read_error_test 00:07:47.080 ************************************ 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g2D6BaRxML 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61405 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61405 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61405 ']' 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.080 16:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.080 [2024-09-28 16:09:01.667853] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:47.080 [2024-09-28 16:09:01.667981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61405 ] 00:07:47.340 [2024-09-28 16:09:01.831081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.600 [2024-09-28 16:09:02.073568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.860 [2024-09-28 16:09:02.297998] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.860 [2024-09-28 16:09:02.298034] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.860 BaseBdev1_malloc 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.860 true 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.860 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.121 [2024-09-28 16:09:02.546290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:48.121 [2024-09-28 16:09:02.546361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.121 [2024-09-28 16:09:02.546379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:48.121 [2024-09-28 16:09:02.546391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.121 [2024-09-28 16:09:02.548768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.121 [2024-09-28 16:09:02.548807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:48.121 BaseBdev1 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:48.121 BaseBdev2_malloc 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.121 true 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.121 [2024-09-28 16:09:02.640760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:48.121 [2024-09-28 16:09:02.640832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.121 [2024-09-28 16:09:02.640851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:48.121 [2024-09-28 16:09:02.640862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.121 [2024-09-28 16:09:02.643249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.121 [2024-09-28 16:09:02.643284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:48.121 BaseBdev2 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:48.121 16:09:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.121 [2024-09-28 16:09:02.652821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.121 [2024-09-28 16:09:02.654877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.121 [2024-09-28 16:09:02.655080] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:48.121 [2024-09-28 16:09:02.655095] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.121 [2024-09-28 16:09:02.655361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:48.121 [2024-09-28 16:09:02.655531] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:48.121 [2024-09-28 16:09:02.655548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:48.121 [2024-09-28 16:09:02.655713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.121 "name": "raid_bdev1", 00:07:48.121 "uuid": "e2436a73-8008-4f98-bd2b-665e70f87a7b", 00:07:48.121 "strip_size_kb": 64, 00:07:48.121 "state": "online", 00:07:48.121 "raid_level": "raid0", 00:07:48.121 "superblock": true, 00:07:48.121 "num_base_bdevs": 2, 00:07:48.121 "num_base_bdevs_discovered": 2, 00:07:48.121 "num_base_bdevs_operational": 2, 00:07:48.121 "base_bdevs_list": [ 00:07:48.121 { 00:07:48.121 "name": "BaseBdev1", 00:07:48.121 "uuid": "d5d0205a-c120-529a-b9f6-97e62280baf3", 00:07:48.121 "is_configured": true, 00:07:48.121 "data_offset": 2048, 00:07:48.121 "data_size": 63488 00:07:48.121 }, 00:07:48.121 { 00:07:48.121 "name": "BaseBdev2", 00:07:48.121 "uuid": "db687633-9664-5883-bccc-ad942f5d8cee", 00:07:48.121 "is_configured": true, 00:07:48.121 "data_offset": 2048, 00:07:48.121 "data_size": 63488 00:07:48.121 } 00:07:48.121 ] 00:07:48.121 }' 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.121 16:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.381 16:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:48.381 16:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:48.641 [2024-09-28 16:09:03.133351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.582 "name": "raid_bdev1", 00:07:49.582 "uuid": "e2436a73-8008-4f98-bd2b-665e70f87a7b", 00:07:49.582 "strip_size_kb": 64, 00:07:49.582 "state": "online", 00:07:49.582 "raid_level": "raid0", 00:07:49.582 "superblock": true, 00:07:49.582 "num_base_bdevs": 2, 00:07:49.582 "num_base_bdevs_discovered": 2, 00:07:49.582 "num_base_bdevs_operational": 2, 00:07:49.582 "base_bdevs_list": [ 00:07:49.582 { 00:07:49.582 "name": "BaseBdev1", 00:07:49.582 "uuid": "d5d0205a-c120-529a-b9f6-97e62280baf3", 00:07:49.582 "is_configured": true, 00:07:49.582 "data_offset": 2048, 00:07:49.582 "data_size": 63488 00:07:49.582 }, 00:07:49.582 { 00:07:49.582 "name": "BaseBdev2", 00:07:49.582 "uuid": "db687633-9664-5883-bccc-ad942f5d8cee", 00:07:49.582 "is_configured": true, 00:07:49.582 "data_offset": 2048, 00:07:49.582 "data_size": 63488 00:07:49.582 } 00:07:49.582 ] 00:07:49.582 }' 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.582 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 [2024-09-28 16:09:04.513552] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.842 [2024-09-28 16:09:04.513598] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.842 [2024-09-28 16:09:04.516249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.842 [2024-09-28 16:09:04.516306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.842 [2024-09-28 16:09:04.516344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.842 [2024-09-28 16:09:04.516356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:49.842 { 00:07:49.842 "results": [ 00:07:49.842 { 00:07:49.842 "job": "raid_bdev1", 00:07:49.842 "core_mask": "0x1", 00:07:49.842 "workload": "randrw", 00:07:49.842 "percentage": 50, 00:07:49.842 "status": "finished", 00:07:49.842 "queue_depth": 1, 00:07:49.842 "io_size": 131072, 00:07:49.842 "runtime": 1.380831, 00:07:49.842 "iops": 15516.018976978356, 00:07:49.842 "mibps": 1939.5023721222944, 00:07:49.842 "io_failed": 1, 00:07:49.842 "io_timeout": 0, 00:07:49.842 "avg_latency_us": 90.40629329668032, 00:07:49.842 "min_latency_us": 24.593886462882097, 00:07:49.842 "max_latency_us": 1380.8349344978167 00:07:49.842 } 00:07:49.842 ], 00:07:49.842 "core_count": 1 00:07:49.842 } 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61405 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61405 ']' 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61405 00:07:49.842 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:50.102 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.102 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61405 00:07:50.102 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.102 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.102 killing process with pid 61405 00:07:50.102 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61405' 00:07:50.102 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61405 00:07:50.102 [2024-09-28 16:09:04.563045] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.102 16:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61405 00:07:50.102 [2024-09-28 16:09:04.711295] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g2D6BaRxML 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:51.484 00:07:51.484 real 0m4.522s 00:07:51.484 user 0m5.181s 00:07:51.484 sys 0m0.667s 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.484 16:09:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.484 ************************************ 00:07:51.484 END TEST raid_read_error_test 00:07:51.484 ************************************ 00:07:51.484 16:09:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:51.484 16:09:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:51.484 16:09:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.484 16:09:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.484 ************************************ 00:07:51.484 START TEST raid_write_error_test 00:07:51.484 ************************************ 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.484 16:09:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:51.484 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YMaSZxcDEf 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61545 00:07:51.743 16:09:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61545 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61545 ']' 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.743 16:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.743 [2024-09-28 16:09:06.255594] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:51.743 [2024-09-28 16:09:06.255717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61545 ] 00:07:51.743 [2024-09-28 16:09:06.418288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.004 [2024-09-28 16:09:06.653977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.263 [2024-09-28 16:09:06.880886] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.263 [2024-09-28 16:09:06.880935] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 BaseBdev1_malloc 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 true 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 [2024-09-28 16:09:07.130617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:52.523 [2024-09-28 16:09:07.130678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.523 [2024-09-28 16:09:07.130711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:52.523 [2024-09-28 16:09:07.130723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.523 [2024-09-28 16:09:07.133132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.523 [2024-09-28 16:09:07.133171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:52.523 BaseBdev1 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.523 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.904 BaseBdev2_malloc 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:52.904 16:09:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.904 true 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.904 [2024-09-28 16:09:07.235850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:52.904 [2024-09-28 16:09:07.235907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.904 [2024-09-28 16:09:07.235939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:52.904 [2024-09-28 16:09:07.235951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.904 [2024-09-28 16:09:07.238333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.904 [2024-09-28 16:09:07.238370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:52.904 BaseBdev2 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.904 [2024-09-28 16:09:07.247917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:52.904 [2024-09-28 16:09:07.249956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.904 [2024-09-28 16:09:07.250135] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.904 [2024-09-28 16:09:07.250150] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.904 [2024-09-28 16:09:07.250418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.904 [2024-09-28 16:09:07.250594] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.904 [2024-09-28 16:09:07.250609] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:52.904 [2024-09-28 16:09:07.250759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.904 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.904 "name": "raid_bdev1", 00:07:52.904 "uuid": "034e772a-f1f5-4b77-8f7b-442875c8ce94", 00:07:52.904 "strip_size_kb": 64, 00:07:52.904 "state": "online", 00:07:52.904 "raid_level": "raid0", 00:07:52.904 "superblock": true, 00:07:52.904 "num_base_bdevs": 2, 00:07:52.904 "num_base_bdevs_discovered": 2, 00:07:52.904 "num_base_bdevs_operational": 2, 00:07:52.904 "base_bdevs_list": [ 00:07:52.904 { 00:07:52.904 "name": "BaseBdev1", 00:07:52.904 "uuid": "a4bc890a-bf3f-5bb0-9523-fab3bd8a3b8e", 00:07:52.904 "is_configured": true, 00:07:52.904 "data_offset": 2048, 00:07:52.904 "data_size": 63488 00:07:52.904 }, 00:07:52.904 { 00:07:52.904 "name": "BaseBdev2", 00:07:52.904 "uuid": "71c0c7e4-a1a8-563d-9a4c-6b10f3ab0f64", 00:07:52.904 "is_configured": true, 00:07:52.904 "data_offset": 2048, 00:07:52.904 "data_size": 63488 00:07:52.904 } 00:07:52.904 ] 00:07:52.904 }' 00:07:52.905 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.905 16:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.177 16:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:53.177 16:09:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:53.177 [2024-09-28 16:09:07.740398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.116 16:09:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.116 "name": "raid_bdev1", 00:07:54.116 "uuid": "034e772a-f1f5-4b77-8f7b-442875c8ce94", 00:07:54.116 "strip_size_kb": 64, 00:07:54.116 "state": "online", 00:07:54.116 "raid_level": "raid0", 00:07:54.116 "superblock": true, 00:07:54.116 "num_base_bdevs": 2, 00:07:54.116 "num_base_bdevs_discovered": 2, 00:07:54.116 "num_base_bdevs_operational": 2, 00:07:54.116 "base_bdevs_list": [ 00:07:54.116 { 00:07:54.116 "name": "BaseBdev1", 00:07:54.116 "uuid": "a4bc890a-bf3f-5bb0-9523-fab3bd8a3b8e", 00:07:54.116 "is_configured": true, 00:07:54.116 "data_offset": 2048, 00:07:54.116 "data_size": 63488 00:07:54.116 }, 00:07:54.116 { 00:07:54.116 "name": "BaseBdev2", 00:07:54.116 "uuid": "71c0c7e4-a1a8-563d-9a4c-6b10f3ab0f64", 00:07:54.116 "is_configured": true, 00:07:54.116 "data_offset": 2048, 00:07:54.116 "data_size": 63488 00:07:54.116 } 00:07:54.116 ] 00:07:54.116 }' 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.116 16:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.684 16:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.684 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.684 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.684 [2024-09-28 16:09:09.124908] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.684 [2024-09-28 16:09:09.124956] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.684 [2024-09-28 16:09:09.127526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.684 [2024-09-28 16:09:09.127579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.684 [2024-09-28 16:09:09.127616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.684 [2024-09-28 16:09:09.127631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:54.684 { 00:07:54.684 "results": [ 00:07:54.684 { 00:07:54.684 "job": "raid_bdev1", 00:07:54.684 "core_mask": "0x1", 00:07:54.684 "workload": "randrw", 00:07:54.684 "percentage": 50, 00:07:54.684 "status": "finished", 00:07:54.684 "queue_depth": 1, 00:07:54.685 "io_size": 131072, 00:07:54.685 "runtime": 1.385284, 00:07:54.685 "iops": 15395.399066184262, 00:07:54.685 "mibps": 1924.4248832730327, 00:07:54.685 "io_failed": 1, 00:07:54.685 "io_timeout": 0, 00:07:54.685 "avg_latency_us": 91.24129503991719, 00:07:54.685 "min_latency_us": 24.929257641921396, 00:07:54.685 "max_latency_us": 1373.6803493449781 00:07:54.685 } 00:07:54.685 ], 00:07:54.685 "core_count": 1 00:07:54.685 } 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61545 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61545 ']' 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61545 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61545 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.685 killing process with pid 61545 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61545' 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61545 00:07:54.685 [2024-09-28 16:09:09.175570] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.685 16:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61545 00:07:54.685 [2024-09-28 16:09:09.318978] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YMaSZxcDEf 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:56.064 00:07:56.064 real 0m4.544s 00:07:56.064 user 0m5.240s 00:07:56.064 sys 0m0.650s 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.064 16:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.064 ************************************ 00:07:56.064 END TEST raid_write_error_test 00:07:56.064 ************************************ 00:07:56.324 16:09:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:56.324 16:09:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:56.324 16:09:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:56.324 16:09:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.324 16:09:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.324 ************************************ 00:07:56.324 START TEST raid_state_function_test 00:07:56.324 ************************************ 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61689 00:07:56.324 Process raid pid: 61689 
00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61689' 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61689 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61689 ']' 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.324 16:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.324 [2024-09-28 16:09:10.870043] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:56.324 [2024-09-28 16:09:10.870164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.583 [2024-09-28 16:09:11.041122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.843 [2024-09-28 16:09:11.283426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.843 [2024-09-28 16:09:11.515880] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.843 [2024-09-28 16:09:11.515914] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.102 [2024-09-28 16:09:11.687667] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.102 [2024-09-28 16:09:11.687726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.102 [2024-09-28 16:09:11.687735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.102 [2024-09-28 16:09:11.687745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.102 16:09:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.102 "name": "Existed_Raid", 00:07:57.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.102 "strip_size_kb": 64, 00:07:57.102 "state": "configuring", 00:07:57.102 
"raid_level": "concat", 00:07:57.102 "superblock": false, 00:07:57.102 "num_base_bdevs": 2, 00:07:57.102 "num_base_bdevs_discovered": 0, 00:07:57.102 "num_base_bdevs_operational": 2, 00:07:57.102 "base_bdevs_list": [ 00:07:57.102 { 00:07:57.102 "name": "BaseBdev1", 00:07:57.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.102 "is_configured": false, 00:07:57.102 "data_offset": 0, 00:07:57.102 "data_size": 0 00:07:57.102 }, 00:07:57.102 { 00:07:57.102 "name": "BaseBdev2", 00:07:57.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.102 "is_configured": false, 00:07:57.102 "data_offset": 0, 00:07:57.102 "data_size": 0 00:07:57.102 } 00:07:57.102 ] 00:07:57.102 }' 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.102 16:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.671 [2024-09-28 16:09:12.126862] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.671 [2024-09-28 16:09:12.126899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:57.671 [2024-09-28 16:09:12.138861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.671 [2024-09-28 16:09:12.138902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.671 [2024-09-28 16:09:12.138911] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.671 [2024-09-28 16:09:12.138930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.671 [2024-09-28 16:09:12.224192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.671 BaseBdev1 00:07:57.671 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.672 [ 00:07:57.672 { 00:07:57.672 "name": "BaseBdev1", 00:07:57.672 "aliases": [ 00:07:57.672 "70ec2ca9-20df-475c-b75b-9fe4d824520f" 00:07:57.672 ], 00:07:57.672 "product_name": "Malloc disk", 00:07:57.672 "block_size": 512, 00:07:57.672 "num_blocks": 65536, 00:07:57.672 "uuid": "70ec2ca9-20df-475c-b75b-9fe4d824520f", 00:07:57.672 "assigned_rate_limits": { 00:07:57.672 "rw_ios_per_sec": 0, 00:07:57.672 "rw_mbytes_per_sec": 0, 00:07:57.672 "r_mbytes_per_sec": 0, 00:07:57.672 "w_mbytes_per_sec": 0 00:07:57.672 }, 00:07:57.672 "claimed": true, 00:07:57.672 "claim_type": "exclusive_write", 00:07:57.672 "zoned": false, 00:07:57.672 "supported_io_types": { 00:07:57.672 "read": true, 00:07:57.672 "write": true, 00:07:57.672 "unmap": true, 00:07:57.672 "flush": true, 00:07:57.672 "reset": true, 00:07:57.672 "nvme_admin": false, 00:07:57.672 "nvme_io": false, 00:07:57.672 "nvme_io_md": false, 00:07:57.672 "write_zeroes": true, 00:07:57.672 "zcopy": true, 00:07:57.672 "get_zone_info": false, 00:07:57.672 "zone_management": false, 00:07:57.672 "zone_append": false, 00:07:57.672 "compare": false, 00:07:57.672 "compare_and_write": false, 00:07:57.672 "abort": true, 00:07:57.672 "seek_hole": false, 00:07:57.672 "seek_data": false, 00:07:57.672 "copy": true, 00:07:57.672 "nvme_iov_md": 
false 00:07:57.672 }, 00:07:57.672 "memory_domains": [ 00:07:57.672 { 00:07:57.672 "dma_device_id": "system", 00:07:57.672 "dma_device_type": 1 00:07:57.672 }, 00:07:57.672 { 00:07:57.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.672 "dma_device_type": 2 00:07:57.672 } 00:07:57.672 ], 00:07:57.672 "driver_specific": {} 00:07:57.672 } 00:07:57.672 ] 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.672 16:09:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.672 "name": "Existed_Raid", 00:07:57.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.672 "strip_size_kb": 64, 00:07:57.672 "state": "configuring", 00:07:57.672 "raid_level": "concat", 00:07:57.672 "superblock": false, 00:07:57.672 "num_base_bdevs": 2, 00:07:57.672 "num_base_bdevs_discovered": 1, 00:07:57.672 "num_base_bdevs_operational": 2, 00:07:57.672 "base_bdevs_list": [ 00:07:57.672 { 00:07:57.672 "name": "BaseBdev1", 00:07:57.672 "uuid": "70ec2ca9-20df-475c-b75b-9fe4d824520f", 00:07:57.672 "is_configured": true, 00:07:57.672 "data_offset": 0, 00:07:57.672 "data_size": 65536 00:07:57.672 }, 00:07:57.672 { 00:07:57.672 "name": "BaseBdev2", 00:07:57.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.672 "is_configured": false, 00:07:57.672 "data_offset": 0, 00:07:57.672 "data_size": 0 00:07:57.672 } 00:07:57.672 ] 00:07:57.672 }' 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.672 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.241 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.241 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.241 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.241 [2024-09-28 16:09:12.727339] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.241 [2024-09-28 16:09:12.727383] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:58.241 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.241 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.241 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.241 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.241 [2024-09-28 16:09:12.739355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.241 [2024-09-28 16:09:12.741457] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.241 [2024-09-28 16:09:12.741497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.242 "name": "Existed_Raid", 00:07:58.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.242 "strip_size_kb": 64, 00:07:58.242 "state": "configuring", 00:07:58.242 "raid_level": "concat", 00:07:58.242 "superblock": false, 00:07:58.242 "num_base_bdevs": 2, 00:07:58.242 "num_base_bdevs_discovered": 1, 00:07:58.242 "num_base_bdevs_operational": 2, 00:07:58.242 "base_bdevs_list": [ 00:07:58.242 { 00:07:58.242 "name": "BaseBdev1", 00:07:58.242 "uuid": "70ec2ca9-20df-475c-b75b-9fe4d824520f", 00:07:58.242 "is_configured": true, 00:07:58.242 "data_offset": 0, 00:07:58.242 "data_size": 65536 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "name": "BaseBdev2", 00:07:58.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.242 "is_configured": false, 00:07:58.242 "data_offset": 0, 00:07:58.242 "data_size": 0 
00:07:58.242 } 00:07:58.242 ] 00:07:58.242 }' 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.242 16:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.501 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.501 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.501 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.760 [2024-09-28 16:09:13.214315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.760 [2024-09-28 16:09:13.214367] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.760 [2024-09-28 16:09:13.214375] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:58.760 [2024-09-28 16:09:13.214671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.760 [2024-09-28 16:09:13.214877] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.760 [2024-09-28 16:09:13.214894] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:58.760 [2024-09-28 16:09:13.215163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.760 BaseBdev2 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.760 16:09:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.760 [ 00:07:58.760 { 00:07:58.760 "name": "BaseBdev2", 00:07:58.760 "aliases": [ 00:07:58.760 "58fe72a6-535e-4c34-8d8c-cfcdc4f05425" 00:07:58.760 ], 00:07:58.760 "product_name": "Malloc disk", 00:07:58.760 "block_size": 512, 00:07:58.760 "num_blocks": 65536, 00:07:58.760 "uuid": "58fe72a6-535e-4c34-8d8c-cfcdc4f05425", 00:07:58.760 "assigned_rate_limits": { 00:07:58.760 "rw_ios_per_sec": 0, 00:07:58.760 "rw_mbytes_per_sec": 0, 00:07:58.760 "r_mbytes_per_sec": 0, 00:07:58.760 "w_mbytes_per_sec": 0 00:07:58.760 }, 00:07:58.760 "claimed": true, 00:07:58.760 "claim_type": "exclusive_write", 00:07:58.760 "zoned": false, 00:07:58.760 "supported_io_types": { 00:07:58.760 "read": true, 00:07:58.760 "write": true, 00:07:58.760 "unmap": true, 00:07:58.760 "flush": true, 00:07:58.760 "reset": true, 00:07:58.760 "nvme_admin": false, 00:07:58.760 "nvme_io": false, 00:07:58.760 "nvme_io_md": 
false, 00:07:58.760 "write_zeroes": true, 00:07:58.760 "zcopy": true, 00:07:58.760 "get_zone_info": false, 00:07:58.760 "zone_management": false, 00:07:58.760 "zone_append": false, 00:07:58.760 "compare": false, 00:07:58.760 "compare_and_write": false, 00:07:58.760 "abort": true, 00:07:58.760 "seek_hole": false, 00:07:58.760 "seek_data": false, 00:07:58.760 "copy": true, 00:07:58.760 "nvme_iov_md": false 00:07:58.760 }, 00:07:58.760 "memory_domains": [ 00:07:58.760 { 00:07:58.760 "dma_device_id": "system", 00:07:58.760 "dma_device_type": 1 00:07:58.760 }, 00:07:58.760 { 00:07:58.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.760 "dma_device_type": 2 00:07:58.760 } 00:07:58.760 ], 00:07:58.760 "driver_specific": {} 00:07:58.760 } 00:07:58.760 ] 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.760 "name": "Existed_Raid", 00:07:58.760 "uuid": "cd955c48-c6a2-40f6-9884-1b8fa0bbc420", 00:07:58.760 "strip_size_kb": 64, 00:07:58.760 "state": "online", 00:07:58.760 "raid_level": "concat", 00:07:58.760 "superblock": false, 00:07:58.760 "num_base_bdevs": 2, 00:07:58.760 "num_base_bdevs_discovered": 2, 00:07:58.760 "num_base_bdevs_operational": 2, 00:07:58.760 "base_bdevs_list": [ 00:07:58.760 { 00:07:58.760 "name": "BaseBdev1", 00:07:58.760 "uuid": "70ec2ca9-20df-475c-b75b-9fe4d824520f", 00:07:58.760 "is_configured": true, 00:07:58.760 "data_offset": 0, 00:07:58.760 "data_size": 65536 00:07:58.760 }, 00:07:58.760 { 00:07:58.760 "name": "BaseBdev2", 00:07:58.760 "uuid": "58fe72a6-535e-4c34-8d8c-cfcdc4f05425", 00:07:58.760 "is_configured": true, 00:07:58.760 "data_offset": 0, 00:07:58.760 "data_size": 65536 00:07:58.760 } 00:07:58.760 ] 00:07:58.760 }' 00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:58.760 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.020 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.020 [2024-09-28 16:09:13.697726] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.280 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.280 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.280 "name": "Existed_Raid", 00:07:59.280 "aliases": [ 00:07:59.280 "cd955c48-c6a2-40f6-9884-1b8fa0bbc420" 00:07:59.280 ], 00:07:59.280 "product_name": "Raid Volume", 00:07:59.281 "block_size": 512, 00:07:59.281 "num_blocks": 131072, 00:07:59.281 "uuid": "cd955c48-c6a2-40f6-9884-1b8fa0bbc420", 00:07:59.281 "assigned_rate_limits": { 00:07:59.281 "rw_ios_per_sec": 0, 00:07:59.281 "rw_mbytes_per_sec": 0, 00:07:59.281 "r_mbytes_per_sec": 
0, 00:07:59.281 "w_mbytes_per_sec": 0 00:07:59.281 }, 00:07:59.281 "claimed": false, 00:07:59.281 "zoned": false, 00:07:59.281 "supported_io_types": { 00:07:59.281 "read": true, 00:07:59.281 "write": true, 00:07:59.281 "unmap": true, 00:07:59.281 "flush": true, 00:07:59.281 "reset": true, 00:07:59.281 "nvme_admin": false, 00:07:59.281 "nvme_io": false, 00:07:59.281 "nvme_io_md": false, 00:07:59.281 "write_zeroes": true, 00:07:59.281 "zcopy": false, 00:07:59.281 "get_zone_info": false, 00:07:59.281 "zone_management": false, 00:07:59.281 "zone_append": false, 00:07:59.281 "compare": false, 00:07:59.281 "compare_and_write": false, 00:07:59.281 "abort": false, 00:07:59.281 "seek_hole": false, 00:07:59.281 "seek_data": false, 00:07:59.281 "copy": false, 00:07:59.281 "nvme_iov_md": false 00:07:59.281 }, 00:07:59.281 "memory_domains": [ 00:07:59.281 { 00:07:59.281 "dma_device_id": "system", 00:07:59.281 "dma_device_type": 1 00:07:59.281 }, 00:07:59.281 { 00:07:59.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.281 "dma_device_type": 2 00:07:59.281 }, 00:07:59.281 { 00:07:59.281 "dma_device_id": "system", 00:07:59.281 "dma_device_type": 1 00:07:59.281 }, 00:07:59.281 { 00:07:59.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.281 "dma_device_type": 2 00:07:59.281 } 00:07:59.281 ], 00:07:59.281 "driver_specific": { 00:07:59.281 "raid": { 00:07:59.281 "uuid": "cd955c48-c6a2-40f6-9884-1b8fa0bbc420", 00:07:59.281 "strip_size_kb": 64, 00:07:59.281 "state": "online", 00:07:59.281 "raid_level": "concat", 00:07:59.281 "superblock": false, 00:07:59.281 "num_base_bdevs": 2, 00:07:59.281 "num_base_bdevs_discovered": 2, 00:07:59.281 "num_base_bdevs_operational": 2, 00:07:59.281 "base_bdevs_list": [ 00:07:59.281 { 00:07:59.281 "name": "BaseBdev1", 00:07:59.281 "uuid": "70ec2ca9-20df-475c-b75b-9fe4d824520f", 00:07:59.281 "is_configured": true, 00:07:59.281 "data_offset": 0, 00:07:59.281 "data_size": 65536 00:07:59.281 }, 00:07:59.281 { 00:07:59.281 "name": "BaseBdev2", 
00:07:59.281 "uuid": "58fe72a6-535e-4c34-8d8c-cfcdc4f05425", 00:07:59.281 "is_configured": true, 00:07:59.281 "data_offset": 0, 00:07:59.281 "data_size": 65536 00:07:59.281 } 00:07:59.281 ] 00:07:59.281 } 00:07:59.281 } 00:07:59.281 }' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.281 BaseBdev2' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.281 16:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.281 [2024-09-28 16:09:13.925143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.281 [2024-09-28 16:09:13.925220] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.281 [2024-09-28 16:09:13.925289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.544 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.544 "name": "Existed_Raid", 00:07:59.544 "uuid": "cd955c48-c6a2-40f6-9884-1b8fa0bbc420", 00:07:59.544 "strip_size_kb": 64, 00:07:59.544 
"state": "offline", 00:07:59.544 "raid_level": "concat", 00:07:59.544 "superblock": false, 00:07:59.544 "num_base_bdevs": 2, 00:07:59.544 "num_base_bdevs_discovered": 1, 00:07:59.544 "num_base_bdevs_operational": 1, 00:07:59.544 "base_bdevs_list": [ 00:07:59.545 { 00:07:59.545 "name": null, 00:07:59.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.545 "is_configured": false, 00:07:59.545 "data_offset": 0, 00:07:59.545 "data_size": 65536 00:07:59.545 }, 00:07:59.545 { 00:07:59.545 "name": "BaseBdev2", 00:07:59.545 "uuid": "58fe72a6-535e-4c34-8d8c-cfcdc4f05425", 00:07:59.545 "is_configured": true, 00:07:59.545 "data_offset": 0, 00:07:59.545 "data_size": 65536 00:07:59.545 } 00:07:59.545 ] 00:07:59.545 }' 00:07:59.545 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.545 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.805 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:59.805 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.805 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.805 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.805 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.805 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.805 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.065 [2024-09-28 16:09:14.504868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.065 [2024-09-28 16:09:14.504979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61689 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61689 ']' 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61689 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61689 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61689' 00:08:00.065 killing process with pid 61689 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61689 00:08:00.065 [2024-09-28 16:09:14.704963] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.065 16:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61689 00:08:00.065 [2024-09-28 16:09:14.722256] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:01.446 ************************************ 00:08:01.446 END TEST raid_state_function_test 00:08:01.446 ************************************ 00:08:01.446 00:08:01.446 real 0m5.281s 00:08:01.446 user 0m7.342s 00:08:01.446 sys 0m0.941s 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.446 16:09:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:01.446 16:09:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:01.446 16:09:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.446 16:09:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.446 ************************************ 00:08:01.446 START TEST raid_state_function_test_sb 00:08:01.446 ************************************ 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:01.446 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61942 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61942' 00:08:01.447 Process raid pid: 61942 00:08:01.447 16:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61942 00:08:01.706 16:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61942 ']' 00:08:01.706 16:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.706 16:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.706 16:09:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.706 16:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.706 16:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.706 [2024-09-28 16:09:16.211929] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:01.706 [2024-09-28 16:09:16.212125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.706 [2024-09-28 16:09:16.376206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.966 [2024-09-28 16:09:16.625373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.225 [2024-09-28 16:09:16.856044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.225 [2024-09-28 16:09:16.856087] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.483 [2024-09-28 16:09:17.051223] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:02.483 [2024-09-28 16:09:17.051296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.483 [2024-09-28 16:09:17.051307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.483 [2024-09-28 16:09:17.051317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.483 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.484 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.484 "name": "Existed_Raid", 00:08:02.484 "uuid": "c927cfd2-7b4c-4b0a-89bc-97f29d02165d", 00:08:02.484 "strip_size_kb": 64, 00:08:02.484 "state": "configuring", 00:08:02.484 "raid_level": "concat", 00:08:02.484 "superblock": true, 00:08:02.484 "num_base_bdevs": 2, 00:08:02.484 "num_base_bdevs_discovered": 0, 00:08:02.484 "num_base_bdevs_operational": 2, 00:08:02.484 "base_bdevs_list": [ 00:08:02.484 { 00:08:02.484 "name": "BaseBdev1", 00:08:02.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.484 "is_configured": false, 00:08:02.484 "data_offset": 0, 00:08:02.484 "data_size": 0 00:08:02.484 }, 00:08:02.484 { 00:08:02.484 "name": "BaseBdev2", 00:08:02.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.484 "is_configured": false, 00:08:02.484 "data_offset": 0, 00:08:02.484 "data_size": 0 00:08:02.484 } 00:08:02.484 ] 00:08:02.484 }' 00:08:02.484 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.484 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.053 [2024-09-28 16:09:17.542347] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:03.053 [2024-09-28 16:09:17.542434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.053 [2024-09-28 16:09:17.554349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.053 [2024-09-28 16:09:17.554436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.053 [2024-09-28 16:09:17.554462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.053 [2024-09-28 16:09:17.554488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.053 [2024-09-28 16:09:17.640313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.053 BaseBdev1 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.053 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.053 [ 00:08:03.053 { 00:08:03.053 "name": "BaseBdev1", 00:08:03.053 "aliases": [ 00:08:03.053 "eee11d05-8df3-4697-8942-e1964a76dbd6" 00:08:03.053 ], 00:08:03.053 "product_name": "Malloc disk", 00:08:03.053 "block_size": 512, 00:08:03.053 "num_blocks": 65536, 00:08:03.053 "uuid": "eee11d05-8df3-4697-8942-e1964a76dbd6", 00:08:03.053 "assigned_rate_limits": { 00:08:03.053 "rw_ios_per_sec": 0, 00:08:03.053 "rw_mbytes_per_sec": 0, 00:08:03.053 "r_mbytes_per_sec": 0, 00:08:03.053 "w_mbytes_per_sec": 0 00:08:03.053 }, 00:08:03.053 "claimed": true, 
00:08:03.053 "claim_type": "exclusive_write", 00:08:03.053 "zoned": false, 00:08:03.053 "supported_io_types": { 00:08:03.053 "read": true, 00:08:03.053 "write": true, 00:08:03.053 "unmap": true, 00:08:03.053 "flush": true, 00:08:03.053 "reset": true, 00:08:03.053 "nvme_admin": false, 00:08:03.053 "nvme_io": false, 00:08:03.053 "nvme_io_md": false, 00:08:03.053 "write_zeroes": true, 00:08:03.053 "zcopy": true, 00:08:03.053 "get_zone_info": false, 00:08:03.053 "zone_management": false, 00:08:03.053 "zone_append": false, 00:08:03.053 "compare": false, 00:08:03.053 "compare_and_write": false, 00:08:03.054 "abort": true, 00:08:03.054 "seek_hole": false, 00:08:03.054 "seek_data": false, 00:08:03.054 "copy": true, 00:08:03.054 "nvme_iov_md": false 00:08:03.054 }, 00:08:03.054 "memory_domains": [ 00:08:03.054 { 00:08:03.054 "dma_device_id": "system", 00:08:03.054 "dma_device_type": 1 00:08:03.054 }, 00:08:03.054 { 00:08:03.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.054 "dma_device_type": 2 00:08:03.054 } 00:08:03.054 ], 00:08:03.054 "driver_specific": {} 00:08:03.054 } 00:08:03.054 ] 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.054 16:09:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.054 "name": "Existed_Raid", 00:08:03.054 "uuid": "633f474f-1b5c-49e1-b401-96ebdadc3274", 00:08:03.054 "strip_size_kb": 64, 00:08:03.054 "state": "configuring", 00:08:03.054 "raid_level": "concat", 00:08:03.054 "superblock": true, 00:08:03.054 "num_base_bdevs": 2, 00:08:03.054 "num_base_bdevs_discovered": 1, 00:08:03.054 "num_base_bdevs_operational": 2, 00:08:03.054 "base_bdevs_list": [ 00:08:03.054 { 00:08:03.054 "name": "BaseBdev1", 00:08:03.054 "uuid": "eee11d05-8df3-4697-8942-e1964a76dbd6", 00:08:03.054 "is_configured": true, 00:08:03.054 "data_offset": 2048, 00:08:03.054 "data_size": 63488 00:08:03.054 }, 00:08:03.054 { 00:08:03.054 "name": "BaseBdev2", 00:08:03.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.054 
"is_configured": false, 00:08:03.054 "data_offset": 0, 00:08:03.054 "data_size": 0 00:08:03.054 } 00:08:03.054 ] 00:08:03.054 }' 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.054 16:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 [2024-09-28 16:09:18.091510] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.623 [2024-09-28 16:09:18.091551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 [2024-09-28 16:09:18.103539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.623 [2024-09-28 16:09:18.105578] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.623 [2024-09-28 16:09:18.105666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 16:09:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 16:09:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.623 "name": "Existed_Raid", 00:08:03.623 "uuid": "f58cf9ba-74dd-4fd4-8bff-eb2730392f52", 00:08:03.623 "strip_size_kb": 64, 00:08:03.623 "state": "configuring", 00:08:03.623 "raid_level": "concat", 00:08:03.623 "superblock": true, 00:08:03.623 "num_base_bdevs": 2, 00:08:03.623 "num_base_bdevs_discovered": 1, 00:08:03.623 "num_base_bdevs_operational": 2, 00:08:03.623 "base_bdevs_list": [ 00:08:03.623 { 00:08:03.623 "name": "BaseBdev1", 00:08:03.623 "uuid": "eee11d05-8df3-4697-8942-e1964a76dbd6", 00:08:03.623 "is_configured": true, 00:08:03.623 "data_offset": 2048, 00:08:03.623 "data_size": 63488 00:08:03.623 }, 00:08:03.623 { 00:08:03.623 "name": "BaseBdev2", 00:08:03.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.623 "is_configured": false, 00:08:03.623 "data_offset": 0, 00:08:03.623 "data_size": 0 00:08:03.623 } 00:08:03.623 ] 00:08:03.623 }' 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.623 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.883 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:03.883 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.883 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.142 [2024-09-28 16:09:18.581619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.142 [2024-09-28 16:09:18.582007] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.142 [2024-09-28 16:09:18.582064] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.142 [2024-09-28 16:09:18.582433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:04.142 [2024-09-28 16:09:18.582628] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.142 [2024-09-28 16:09:18.582673] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:04.142 BaseBdev2 00:08:04.143 [2024-09-28 16:09:18.582874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.143 16:09:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.143 [ 00:08:04.143 { 00:08:04.143 "name": "BaseBdev2", 00:08:04.143 "aliases": [ 00:08:04.143 "d4256c10-867b-4bdf-93ff-c6be283ef46d" 00:08:04.143 ], 00:08:04.143 "product_name": "Malloc disk", 00:08:04.143 "block_size": 512, 00:08:04.143 "num_blocks": 65536, 00:08:04.143 "uuid": "d4256c10-867b-4bdf-93ff-c6be283ef46d", 00:08:04.143 "assigned_rate_limits": { 00:08:04.143 "rw_ios_per_sec": 0, 00:08:04.143 "rw_mbytes_per_sec": 0, 00:08:04.143 "r_mbytes_per_sec": 0, 00:08:04.143 "w_mbytes_per_sec": 0 00:08:04.143 }, 00:08:04.143 "claimed": true, 00:08:04.143 "claim_type": "exclusive_write", 00:08:04.143 "zoned": false, 00:08:04.143 "supported_io_types": { 00:08:04.143 "read": true, 00:08:04.143 "write": true, 00:08:04.143 "unmap": true, 00:08:04.143 "flush": true, 00:08:04.143 "reset": true, 00:08:04.143 "nvme_admin": false, 00:08:04.143 "nvme_io": false, 00:08:04.143 "nvme_io_md": false, 00:08:04.143 "write_zeroes": true, 00:08:04.143 "zcopy": true, 00:08:04.143 "get_zone_info": false, 00:08:04.143 "zone_management": false, 00:08:04.143 "zone_append": false, 00:08:04.143 "compare": false, 00:08:04.143 "compare_and_write": false, 00:08:04.143 "abort": true, 00:08:04.143 "seek_hole": false, 00:08:04.143 "seek_data": false, 00:08:04.143 "copy": true, 00:08:04.143 "nvme_iov_md": false 00:08:04.143 }, 00:08:04.143 "memory_domains": [ 00:08:04.143 { 00:08:04.143 "dma_device_id": "system", 00:08:04.143 "dma_device_type": 1 00:08:04.143 }, 00:08:04.143 { 00:08:04.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.143 "dma_device_type": 2 00:08:04.143 } 00:08:04.143 ], 00:08:04.143 "driver_specific": {} 00:08:04.143 } 00:08:04.143 ] 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:04.143 16:09:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.143 16:09:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.143 "name": "Existed_Raid", 00:08:04.143 "uuid": "f58cf9ba-74dd-4fd4-8bff-eb2730392f52", 00:08:04.143 "strip_size_kb": 64, 00:08:04.143 "state": "online", 00:08:04.143 "raid_level": "concat", 00:08:04.143 "superblock": true, 00:08:04.143 "num_base_bdevs": 2, 00:08:04.143 "num_base_bdevs_discovered": 2, 00:08:04.143 "num_base_bdevs_operational": 2, 00:08:04.143 "base_bdevs_list": [ 00:08:04.143 { 00:08:04.143 "name": "BaseBdev1", 00:08:04.143 "uuid": "eee11d05-8df3-4697-8942-e1964a76dbd6", 00:08:04.143 "is_configured": true, 00:08:04.143 "data_offset": 2048, 00:08:04.143 "data_size": 63488 00:08:04.143 }, 00:08:04.143 { 00:08:04.143 "name": "BaseBdev2", 00:08:04.143 "uuid": "d4256c10-867b-4bdf-93ff-c6be283ef46d", 00:08:04.143 "is_configured": true, 00:08:04.143 "data_offset": 2048, 00:08:04.143 "data_size": 63488 00:08:04.143 } 00:08:04.143 ] 00:08:04.143 }' 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.143 16:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.403 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.403 [2024-09-28 16:09:19.085008] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.663 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.663 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.663 "name": "Existed_Raid", 00:08:04.663 "aliases": [ 00:08:04.663 "f58cf9ba-74dd-4fd4-8bff-eb2730392f52" 00:08:04.663 ], 00:08:04.663 "product_name": "Raid Volume", 00:08:04.663 "block_size": 512, 00:08:04.663 "num_blocks": 126976, 00:08:04.663 "uuid": "f58cf9ba-74dd-4fd4-8bff-eb2730392f52", 00:08:04.663 "assigned_rate_limits": { 00:08:04.663 "rw_ios_per_sec": 0, 00:08:04.663 "rw_mbytes_per_sec": 0, 00:08:04.663 "r_mbytes_per_sec": 0, 00:08:04.663 "w_mbytes_per_sec": 0 00:08:04.663 }, 00:08:04.663 "claimed": false, 00:08:04.663 "zoned": false, 00:08:04.663 "supported_io_types": { 00:08:04.663 "read": true, 00:08:04.663 "write": true, 00:08:04.663 "unmap": true, 00:08:04.663 "flush": true, 00:08:04.663 "reset": true, 00:08:04.663 "nvme_admin": false, 00:08:04.663 "nvme_io": false, 00:08:04.663 "nvme_io_md": false, 00:08:04.663 "write_zeroes": true, 00:08:04.663 "zcopy": false, 00:08:04.663 "get_zone_info": false, 00:08:04.663 "zone_management": false, 00:08:04.663 "zone_append": false, 00:08:04.663 "compare": false, 00:08:04.663 "compare_and_write": false, 00:08:04.663 "abort": false, 00:08:04.663 "seek_hole": false, 00:08:04.663 "seek_data": false, 00:08:04.663 "copy": false, 00:08:04.663 "nvme_iov_md": false 00:08:04.663 }, 00:08:04.663 "memory_domains": [ 00:08:04.663 { 00:08:04.663 
"dma_device_id": "system", 00:08:04.663 "dma_device_type": 1 00:08:04.663 }, 00:08:04.663 { 00:08:04.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.663 "dma_device_type": 2 00:08:04.663 }, 00:08:04.663 { 00:08:04.663 "dma_device_id": "system", 00:08:04.663 "dma_device_type": 1 00:08:04.663 }, 00:08:04.663 { 00:08:04.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.663 "dma_device_type": 2 00:08:04.664 } 00:08:04.664 ], 00:08:04.664 "driver_specific": { 00:08:04.664 "raid": { 00:08:04.664 "uuid": "f58cf9ba-74dd-4fd4-8bff-eb2730392f52", 00:08:04.664 "strip_size_kb": 64, 00:08:04.664 "state": "online", 00:08:04.664 "raid_level": "concat", 00:08:04.664 "superblock": true, 00:08:04.664 "num_base_bdevs": 2, 00:08:04.664 "num_base_bdevs_discovered": 2, 00:08:04.664 "num_base_bdevs_operational": 2, 00:08:04.664 "base_bdevs_list": [ 00:08:04.664 { 00:08:04.664 "name": "BaseBdev1", 00:08:04.664 "uuid": "eee11d05-8df3-4697-8942-e1964a76dbd6", 00:08:04.664 "is_configured": true, 00:08:04.664 "data_offset": 2048, 00:08:04.664 "data_size": 63488 00:08:04.664 }, 00:08:04.664 { 00:08:04.664 "name": "BaseBdev2", 00:08:04.664 "uuid": "d4256c10-867b-4bdf-93ff-c6be283ef46d", 00:08:04.664 "is_configured": true, 00:08:04.664 "data_offset": 2048, 00:08:04.664 "data_size": 63488 00:08:04.664 } 00:08:04.664 ] 00:08:04.664 } 00:08:04.664 } 00:08:04.664 }' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:04.664 BaseBdev2' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.664 16:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.664 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.664 [2024-09-28 16:09:19.300426] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:04.664 [2024-09-28 16:09:19.300500] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.664 [2024-09-28 16:09:19.300551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.924 "name": "Existed_Raid", 00:08:04.924 "uuid": "f58cf9ba-74dd-4fd4-8bff-eb2730392f52", 00:08:04.924 "strip_size_kb": 64, 00:08:04.924 "state": "offline", 00:08:04.924 "raid_level": "concat", 00:08:04.924 "superblock": true, 00:08:04.924 "num_base_bdevs": 2, 00:08:04.924 "num_base_bdevs_discovered": 1, 00:08:04.924 "num_base_bdevs_operational": 1, 00:08:04.924 "base_bdevs_list": [ 00:08:04.924 { 00:08:04.924 "name": null, 00:08:04.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.924 "is_configured": false, 00:08:04.924 "data_offset": 0, 00:08:04.924 "data_size": 63488 00:08:04.924 }, 00:08:04.924 { 00:08:04.924 "name": "BaseBdev2", 00:08:04.924 "uuid": "d4256c10-867b-4bdf-93ff-c6be283ef46d", 00:08:04.924 "is_configured": true, 00:08:04.924 "data_offset": 2048, 00:08:04.924 "data_size": 63488 00:08:04.924 } 00:08:04.924 ] 
00:08:04.924 }' 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.924 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.184 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.184 [2024-09-28 16:09:19.844145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:05.184 [2024-09-28 16:09:19.844277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.444 16:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61942 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61942 ']' 00:08:05.444 16:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61942 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61942 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:08:05.444 killing process with pid 61942 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61942' 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61942 00:08:05.444 [2024-09-28 16:09:20.029720] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.444 16:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61942 00:08:05.444 [2024-09-28 16:09:20.046714] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.824 16:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.824 00:08:06.824 real 0m5.241s 00:08:06.824 user 0m7.312s 00:08:06.824 sys 0m0.922s 00:08:06.824 ************************************ 00:08:06.824 END TEST raid_state_function_test_sb 00:08:06.824 ************************************ 00:08:06.824 16:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.824 16:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.825 16:09:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:06.825 16:09:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:06.825 16:09:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.825 16:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.825 ************************************ 00:08:06.825 START TEST raid_superblock_test 00:08:06.825 ************************************ 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62194 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62194 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62194 ']' 00:08:06.825 16:09:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.825 16:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.084 [2024-09-28 16:09:21.518404] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:07.084 [2024-09-28 16:09:21.518556] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62194 ] 00:08:07.084 [2024-09-28 16:09:21.681704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.344 [2024-09-28 16:09:21.927413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.603 [2024-09-28 16:09:22.143794] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.603 [2024-09-28 16:09:22.143943] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.863 
16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.863 malloc1 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.863 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.863 [2024-09-28 16:09:22.396870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:07.863 [2024-09-28 16:09:22.396994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.863 [2024-09-28 16:09:22.397039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:07.863 [2024-09-28 16:09:22.397090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:08:07.863 [2024-09-28 16:09:22.399466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.863 [2024-09-28 16:09:22.399562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:07.863 pt1 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.864 malloc2 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.864 [2024-09-28 16:09:22.489282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:07.864 [2024-09-28 16:09:22.489395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.864 [2024-09-28 16:09:22.489438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:07.864 [2024-09-28 16:09:22.489484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.864 [2024-09-28 16:09:22.491847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.864 [2024-09-28 16:09:22.491922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:07.864 pt2 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.864 [2024-09-28 16:09:22.501329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:07.864 [2024-09-28 16:09:22.503408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:07.864 [2024-09-28 16:09:22.503629] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:07.864 [2024-09-28 16:09:22.503672] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:07.864 [2024-09-28 16:09:22.503921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:07.864 [2024-09-28 16:09:22.504099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:07.864 [2024-09-28 16:09:22.504141] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:07.864 [2024-09-28 16:09:22.504331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.864 16:09:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.864 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.123 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.123 "name": "raid_bdev1", 00:08:08.123 "uuid": "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b", 00:08:08.123 "strip_size_kb": 64, 00:08:08.123 "state": "online", 00:08:08.123 "raid_level": "concat", 00:08:08.123 "superblock": true, 00:08:08.123 "num_base_bdevs": 2, 00:08:08.123 "num_base_bdevs_discovered": 2, 00:08:08.124 "num_base_bdevs_operational": 2, 00:08:08.124 "base_bdevs_list": [ 00:08:08.124 { 00:08:08.124 "name": "pt1", 00:08:08.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.124 "is_configured": true, 00:08:08.124 "data_offset": 2048, 00:08:08.124 "data_size": 63488 00:08:08.124 }, 00:08:08.124 { 00:08:08.124 "name": "pt2", 00:08:08.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.124 "is_configured": true, 00:08:08.124 "data_offset": 2048, 00:08:08.124 "data_size": 63488 00:08:08.124 } 00:08:08.124 ] 00:08:08.124 }' 00:08:08.124 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.124 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.384 
16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.384 [2024-09-28 16:09:22.932768] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.384 "name": "raid_bdev1", 00:08:08.384 "aliases": [ 00:08:08.384 "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b" 00:08:08.384 ], 00:08:08.384 "product_name": "Raid Volume", 00:08:08.384 "block_size": 512, 00:08:08.384 "num_blocks": 126976, 00:08:08.384 "uuid": "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b", 00:08:08.384 "assigned_rate_limits": { 00:08:08.384 "rw_ios_per_sec": 0, 00:08:08.384 "rw_mbytes_per_sec": 0, 00:08:08.384 "r_mbytes_per_sec": 0, 00:08:08.384 "w_mbytes_per_sec": 0 00:08:08.384 }, 00:08:08.384 "claimed": false, 00:08:08.384 "zoned": false, 00:08:08.384 "supported_io_types": { 00:08:08.384 "read": true, 00:08:08.384 "write": true, 00:08:08.384 "unmap": true, 00:08:08.384 "flush": true, 00:08:08.384 "reset": true, 00:08:08.384 "nvme_admin": false, 00:08:08.384 "nvme_io": false, 00:08:08.384 "nvme_io_md": false, 00:08:08.384 "write_zeroes": true, 00:08:08.384 "zcopy": false, 00:08:08.384 "get_zone_info": false, 00:08:08.384 "zone_management": false, 00:08:08.384 "zone_append": false, 00:08:08.384 "compare": false, 00:08:08.384 "compare_and_write": false, 00:08:08.384 "abort": false, 00:08:08.384 "seek_hole": false, 00:08:08.384 
"seek_data": false, 00:08:08.384 "copy": false, 00:08:08.384 "nvme_iov_md": false 00:08:08.384 }, 00:08:08.384 "memory_domains": [ 00:08:08.384 { 00:08:08.384 "dma_device_id": "system", 00:08:08.384 "dma_device_type": 1 00:08:08.384 }, 00:08:08.384 { 00:08:08.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.384 "dma_device_type": 2 00:08:08.384 }, 00:08:08.384 { 00:08:08.384 "dma_device_id": "system", 00:08:08.384 "dma_device_type": 1 00:08:08.384 }, 00:08:08.384 { 00:08:08.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.384 "dma_device_type": 2 00:08:08.384 } 00:08:08.384 ], 00:08:08.384 "driver_specific": { 00:08:08.384 "raid": { 00:08:08.384 "uuid": "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b", 00:08:08.384 "strip_size_kb": 64, 00:08:08.384 "state": "online", 00:08:08.384 "raid_level": "concat", 00:08:08.384 "superblock": true, 00:08:08.384 "num_base_bdevs": 2, 00:08:08.384 "num_base_bdevs_discovered": 2, 00:08:08.384 "num_base_bdevs_operational": 2, 00:08:08.384 "base_bdevs_list": [ 00:08:08.384 { 00:08:08.384 "name": "pt1", 00:08:08.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.384 "is_configured": true, 00:08:08.384 "data_offset": 2048, 00:08:08.384 "data_size": 63488 00:08:08.384 }, 00:08:08.384 { 00:08:08.384 "name": "pt2", 00:08:08.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.384 "is_configured": true, 00:08:08.384 "data_offset": 2048, 00:08:08.384 "data_size": 63488 00:08:08.384 } 00:08:08.384 ] 00:08:08.384 } 00:08:08.384 } 00:08:08.384 }' 00:08:08.384 16:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.384 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:08.384 pt2' 00:08:08.384 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.384 16:09:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.384 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.384 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.384 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:08.384 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.384 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.644 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:08.645 [2024-09-28 16:09:23.144364] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b ']' 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.645 [2024-09-28 16:09:23.188073] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.645 [2024-09-28 16:09:23.188137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.645 [2024-09-28 16:09:23.188213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.645 [2024-09-28 16:09:23.188285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.645 [2024-09-28 16:09:23.188301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.645 [2024-09-28 16:09:23.319841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:08.645 [2024-09-28 16:09:23.321907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:08.645 [2024-09-28 16:09:23.321972] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:08.645 [2024-09-28 16:09:23.322019] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:08.645 [2024-09-28 16:09:23.322034] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.645 [2024-09-28 16:09:23.322043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:08.645 request: 00:08:08.645 { 00:08:08.645 "name": "raid_bdev1", 00:08:08.645 "raid_level": "concat", 00:08:08.645 "base_bdevs": [ 00:08:08.645 "malloc1", 00:08:08.645 "malloc2" 00:08:08.645 ], 00:08:08.645 "strip_size_kb": 64, 00:08:08.645 "superblock": false, 00:08:08.645 "method": "bdev_raid_create", 00:08:08.645 "req_id": 1 00:08:08.645 } 00:08:08.645 Got JSON-RPC error response 00:08:08.645 response: 00:08:08.645 { 00:08:08.645 "code": -17, 00:08:08.645 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:08.645 } 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:08.645 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.905 
16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.905 [2024-09-28 16:09:23.371729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.905 [2024-09-28 16:09:23.371817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.905 [2024-09-28 16:09:23.371853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:08.905 [2024-09-28 16:09:23.371888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.905 [2024-09-28 16:09:23.374250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.905 [2024-09-28 16:09:23.374334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.905 [2024-09-28 16:09:23.374422] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:08.905 [2024-09-28 16:09:23.374510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:08.905 pt1 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.905 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.906 "name": "raid_bdev1", 00:08:08.906 "uuid": "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b", 00:08:08.906 "strip_size_kb": 64, 00:08:08.906 "state": "configuring", 00:08:08.906 "raid_level": "concat", 00:08:08.906 "superblock": true, 00:08:08.906 "num_base_bdevs": 2, 00:08:08.906 "num_base_bdevs_discovered": 1, 00:08:08.906 "num_base_bdevs_operational": 2, 00:08:08.906 "base_bdevs_list": [ 00:08:08.906 { 00:08:08.906 "name": "pt1", 00:08:08.906 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:08.906 "is_configured": true, 00:08:08.906 "data_offset": 2048, 00:08:08.906 "data_size": 63488 00:08:08.906 }, 00:08:08.906 { 00:08:08.906 "name": null, 00:08:08.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.906 "is_configured": false, 00:08:08.906 "data_offset": 2048, 00:08:08.906 "data_size": 63488 00:08:08.906 } 00:08:08.906 ] 00:08:08.906 }' 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.906 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.166 [2024-09-28 16:09:23.787015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:09.166 [2024-09-28 16:09:23.787078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.166 [2024-09-28 16:09:23.787100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:09.166 [2024-09-28 16:09:23.787111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.166 [2024-09-28 16:09:23.787586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.166 [2024-09-28 16:09:23.787656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:09.166 [2024-09-28 16:09:23.787733] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:09.166 [2024-09-28 16:09:23.787761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:09.166 [2024-09-28 16:09:23.787890] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.166 [2024-09-28 16:09:23.787902] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:09.166 [2024-09-28 16:09:23.788149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:09.166 [2024-09-28 16:09:23.788309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.166 [2024-09-28 16:09:23.788319] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:09.166 [2024-09-28 16:09:23.788467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.166 pt2 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.166 16:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.166 "name": "raid_bdev1", 00:08:09.166 "uuid": "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b", 00:08:09.166 "strip_size_kb": 64, 00:08:09.166 "state": "online", 00:08:09.166 "raid_level": "concat", 00:08:09.166 "superblock": true, 00:08:09.166 "num_base_bdevs": 2, 00:08:09.166 "num_base_bdevs_discovered": 2, 00:08:09.166 "num_base_bdevs_operational": 2, 00:08:09.166 "base_bdevs_list": [ 00:08:09.166 { 00:08:09.166 "name": "pt1", 00:08:09.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.166 "is_configured": true, 00:08:09.167 "data_offset": 2048, 00:08:09.167 "data_size": 63488 00:08:09.167 }, 00:08:09.167 { 00:08:09.167 "name": "pt2", 00:08:09.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.167 "is_configured": true, 00:08:09.167 "data_offset": 2048, 00:08:09.167 "data_size": 63488 00:08:09.167 } 00:08:09.167 ] 00:08:09.167 }' 00:08:09.167 16:09:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.167 16:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.736 [2024-09-28 16:09:24.226627] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.736 "name": "raid_bdev1", 00:08:09.736 "aliases": [ 00:08:09.736 "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b" 00:08:09.736 ], 00:08:09.736 "product_name": "Raid Volume", 00:08:09.736 "block_size": 512, 00:08:09.736 "num_blocks": 126976, 00:08:09.736 "uuid": "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b", 00:08:09.736 "assigned_rate_limits": { 00:08:09.736 "rw_ios_per_sec": 0, 00:08:09.736 "rw_mbytes_per_sec": 0, 00:08:09.736 
"r_mbytes_per_sec": 0, 00:08:09.736 "w_mbytes_per_sec": 0 00:08:09.736 }, 00:08:09.736 "claimed": false, 00:08:09.736 "zoned": false, 00:08:09.736 "supported_io_types": { 00:08:09.736 "read": true, 00:08:09.736 "write": true, 00:08:09.736 "unmap": true, 00:08:09.736 "flush": true, 00:08:09.736 "reset": true, 00:08:09.736 "nvme_admin": false, 00:08:09.736 "nvme_io": false, 00:08:09.736 "nvme_io_md": false, 00:08:09.736 "write_zeroes": true, 00:08:09.736 "zcopy": false, 00:08:09.736 "get_zone_info": false, 00:08:09.736 "zone_management": false, 00:08:09.736 "zone_append": false, 00:08:09.736 "compare": false, 00:08:09.736 "compare_and_write": false, 00:08:09.736 "abort": false, 00:08:09.736 "seek_hole": false, 00:08:09.736 "seek_data": false, 00:08:09.736 "copy": false, 00:08:09.736 "nvme_iov_md": false 00:08:09.736 }, 00:08:09.736 "memory_domains": [ 00:08:09.736 { 00:08:09.736 "dma_device_id": "system", 00:08:09.736 "dma_device_type": 1 00:08:09.736 }, 00:08:09.736 { 00:08:09.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.736 "dma_device_type": 2 00:08:09.736 }, 00:08:09.736 { 00:08:09.736 "dma_device_id": "system", 00:08:09.736 "dma_device_type": 1 00:08:09.736 }, 00:08:09.736 { 00:08:09.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.736 "dma_device_type": 2 00:08:09.736 } 00:08:09.736 ], 00:08:09.736 "driver_specific": { 00:08:09.736 "raid": { 00:08:09.736 "uuid": "dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b", 00:08:09.736 "strip_size_kb": 64, 00:08:09.736 "state": "online", 00:08:09.736 "raid_level": "concat", 00:08:09.736 "superblock": true, 00:08:09.736 "num_base_bdevs": 2, 00:08:09.736 "num_base_bdevs_discovered": 2, 00:08:09.736 "num_base_bdevs_operational": 2, 00:08:09.736 "base_bdevs_list": [ 00:08:09.736 { 00:08:09.736 "name": "pt1", 00:08:09.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.736 "is_configured": true, 00:08:09.736 "data_offset": 2048, 00:08:09.736 "data_size": 63488 00:08:09.736 }, 00:08:09.736 { 00:08:09.736 "name": 
"pt2", 00:08:09.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.736 "is_configured": true, 00:08:09.736 "data_offset": 2048, 00:08:09.736 "data_size": 63488 00:08:09.736 } 00:08:09.736 ] 00:08:09.736 } 00:08:09.736 } 00:08:09.736 }' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.736 pt2' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.736 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.996 [2024-09-28 16:09:24.426309] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b '!=' dbbf1bc3-ffa9-4937-a2aa-f58d0b505c6b ']' 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62194 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62194 ']' 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 62194 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62194 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.996 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.997 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62194' 00:08:09.997 killing process with pid 62194 00:08:09.997 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62194 00:08:09.997 [2024-09-28 16:09:24.512550] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.997 [2024-09-28 16:09:24.512678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.997 [2024-09-28 16:09:24.512753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr 16:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62194 00:08:09.997 ee all in destruct 00:08:09.997 [2024-09-28 16:09:24.512820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:10.257 [2024-09-28 16:09:24.726833] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.639 16:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:11.639 00:08:11.639 real 0m4.608s 00:08:11.639 user 0m6.153s 00:08:11.639 sys 0m0.872s 00:08:11.639 16:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.639 ************************************ 00:08:11.639 END TEST 
raid_superblock_test 00:08:11.639 ************************************ 00:08:11.639 16:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.639 16:09:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:11.639 16:09:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:11.639 16:09:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.639 16:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.639 ************************************ 00:08:11.639 START TEST raid_read_error_test 00:08:11.639 ************************************ 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:11.639 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OpHKZSHoZc 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62411 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62411 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62411 ']' 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.640 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.640 16:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.640 [2024-09-28 16:09:26.216427] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:11.640 [2024-09-28 16:09:26.216541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62411 ] 00:08:11.916 [2024-09-28 16:09:26.378306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.227 [2024-09-28 16:09:26.610101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.227 [2024-09-28 16:09:26.844683] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.227 [2024-09-28 16:09:26.844725] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.513 
BaseBdev1_malloc 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.513 true 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.513 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.513 [2024-09-28 16:09:27.103617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:12.514 [2024-09-28 16:09:27.103721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.514 [2024-09-28 16:09:27.103758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:12.514 [2024-09-28 16:09:27.103770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.514 [2024-09-28 16:09:27.106137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.514 [2024-09-28 16:09:27.106176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:12.514 BaseBdev1 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.514 BaseBdev2_malloc 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.514 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.778 true 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.778 [2024-09-28 16:09:27.205735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:12.778 [2024-09-28 16:09:27.205791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.778 [2024-09-28 16:09:27.205809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:12.778 [2024-09-28 16:09:27.205820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.778 [2024-09-28 16:09:27.208166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.778 [2024-09-28 16:09:27.208285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:12.778 BaseBdev2 00:08:12.778 16:09:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.778 [2024-09-28 16:09:27.217795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.778 [2024-09-28 16:09:27.219850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.778 [2024-09-28 16:09:27.220090] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.778 [2024-09-28 16:09:27.220109] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:12.778 [2024-09-28 16:09:27.220348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.778 [2024-09-28 16:09:27.220518] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.778 [2024-09-28 16:09:27.220528] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:12.778 [2024-09-28 16:09:27.220686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.778 16:09:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.778 "name": "raid_bdev1", 00:08:12.778 "uuid": "e913172f-a4ef-4987-8889-1293f411d129", 00:08:12.778 "strip_size_kb": 64, 00:08:12.778 "state": "online", 00:08:12.778 "raid_level": "concat", 00:08:12.778 "superblock": true, 00:08:12.778 "num_base_bdevs": 2, 00:08:12.778 "num_base_bdevs_discovered": 2, 00:08:12.778 "num_base_bdevs_operational": 2, 00:08:12.778 "base_bdevs_list": [ 00:08:12.778 { 00:08:12.778 "name": "BaseBdev1", 00:08:12.778 "uuid": "5ca60bc2-7d82-5b17-b2a7-8487f8d2c7fc", 00:08:12.778 "is_configured": true, 00:08:12.778 "data_offset": 2048, 00:08:12.778 "data_size": 63488 00:08:12.778 }, 
00:08:12.778 { 00:08:12.778 "name": "BaseBdev2", 00:08:12.778 "uuid": "ae132902-feef-560f-8328-d6f82a6989b2", 00:08:12.778 "is_configured": true, 00:08:12.778 "data_offset": 2048, 00:08:12.778 "data_size": 63488 00:08:12.778 } 00:08:12.778 ] 00:08:12.778 }' 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.778 16:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.038 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:13.038 16:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:13.298 [2024-09-28 16:09:27.738294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.238 16:09:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.238 "name": "raid_bdev1", 00:08:14.238 "uuid": "e913172f-a4ef-4987-8889-1293f411d129", 00:08:14.238 "strip_size_kb": 64, 00:08:14.238 "state": "online", 00:08:14.238 "raid_level": "concat", 00:08:14.238 "superblock": true, 00:08:14.238 "num_base_bdevs": 2, 00:08:14.238 "num_base_bdevs_discovered": 2, 00:08:14.238 "num_base_bdevs_operational": 2, 00:08:14.238 "base_bdevs_list": [ 00:08:14.238 { 00:08:14.238 "name": "BaseBdev1", 00:08:14.238 "uuid": "5ca60bc2-7d82-5b17-b2a7-8487f8d2c7fc", 00:08:14.238 "is_configured": true, 00:08:14.238 "data_offset": 2048, 00:08:14.238 "data_size": 63488 00:08:14.238 }, 
00:08:14.238 { 00:08:14.238 "name": "BaseBdev2", 00:08:14.238 "uuid": "ae132902-feef-560f-8328-d6f82a6989b2", 00:08:14.238 "is_configured": true, 00:08:14.238 "data_offset": 2048, 00:08:14.238 "data_size": 63488 00:08:14.238 } 00:08:14.238 ] 00:08:14.238 }' 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.238 16:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.498 [2024-09-28 16:09:29.127332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.498 [2024-09-28 16:09:29.127371] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.498 [2024-09-28 16:09:29.130104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.498 [2024-09-28 16:09:29.130157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.498 [2024-09-28 16:09:29.130191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.498 [2024-09-28 16:09:29.130203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:14.498 { 00:08:14.498 "results": [ 00:08:14.498 { 00:08:14.498 "job": "raid_bdev1", 00:08:14.498 "core_mask": "0x1", 00:08:14.498 "workload": "randrw", 00:08:14.498 "percentage": 50, 00:08:14.498 "status": "finished", 00:08:14.498 "queue_depth": 1, 00:08:14.498 "io_size": 131072, 00:08:14.498 "runtime": 1.389664, 00:08:14.498 "iops": 15325.287263683884, 00:08:14.498 "mibps": 1915.6609079604855, 00:08:14.498 "io_failed": 1, 
00:08:14.498 "io_timeout": 0, 00:08:14.498 "avg_latency_us": 91.53375124711876, 00:08:14.498 "min_latency_us": 24.817467248908297, 00:08:14.498 "max_latency_us": 1974.665502183406 00:08:14.498 } 00:08:14.498 ], 00:08:14.498 "core_count": 1 00:08:14.498 } 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62411 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62411 ']' 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62411 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62411 00:08:14.498 killing process with pid 62411 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62411' 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62411 00:08:14.498 [2024-09-28 16:09:29.177725] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.498 16:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62411 00:08:14.757 [2024-09-28 16:09:29.316376] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 
-- # grep -v Job /raidtest/tmp.OpHKZSHoZc 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:16.138 00:08:16.138 real 0m4.586s 00:08:16.138 user 0m5.294s 00:08:16.138 sys 0m0.670s 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.138 16:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.138 ************************************ 00:08:16.138 END TEST raid_read_error_test 00:08:16.138 ************************************ 00:08:16.138 16:09:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:16.138 16:09:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.138 16:09:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.138 16:09:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.138 ************************************ 00:08:16.138 START TEST raid_write_error_test 00:08:16.138 ************************************ 00:08:16.138 16:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0N07WHTcEa 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62551 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62551 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62551 ']' 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.139 16:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.398 [2024-09-28 16:09:30.875868] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:16.398 [2024-09-28 16:09:30.876054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62551 ] 00:08:16.398 [2024-09-28 16:09:31.039347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.657 [2024-09-28 16:09:31.272484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.917 [2024-09-28 16:09:31.501430] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.917 [2024-09-28 16:09:31.501468] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.176 BaseBdev1_malloc 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.176 true 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.176 [2024-09-28 16:09:31.751920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.176 [2024-09-28 16:09:31.752078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.176 [2024-09-28 16:09:31.752100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.176 [2024-09-28 16:09:31.752112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.176 [2024-09-28 16:09:31.754577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.176 [2024-09-28 16:09:31.754611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.176 BaseBdev1 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.176 BaseBdev2_malloc 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.176 16:09:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.176 true 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.176 [2024-09-28 16:09:31.850556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.176 [2024-09-28 16:09:31.850611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.176 [2024-09-28 16:09:31.850643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.176 [2024-09-28 16:09:31.850654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.176 [2024-09-28 16:09:31.852986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.176 [2024-09-28 16:09:31.853026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.176 BaseBdev2 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.176 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.436 [2024-09-28 16:09:31.862610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:17.436 [2024-09-28 16:09:31.864654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.436 [2024-09-28 16:09:31.864852] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.436 [2024-09-28 16:09:31.864867] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:17.436 [2024-09-28 16:09:31.865105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.436 [2024-09-28 16:09:31.865278] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.436 [2024-09-28 16:09:31.865288] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:17.436 [2024-09-28 16:09:31.865450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.436 16:09:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.436 "name": "raid_bdev1", 00:08:17.436 "uuid": "6a798451-86db-4ae7-afd4-9a908a3c5d41", 00:08:17.436 "strip_size_kb": 64, 00:08:17.436 "state": "online", 00:08:17.436 "raid_level": "concat", 00:08:17.436 "superblock": true, 00:08:17.436 "num_base_bdevs": 2, 00:08:17.436 "num_base_bdevs_discovered": 2, 00:08:17.436 "num_base_bdevs_operational": 2, 00:08:17.436 "base_bdevs_list": [ 00:08:17.436 { 00:08:17.436 "name": "BaseBdev1", 00:08:17.436 "uuid": "30a302d1-e473-5a43-93e2-d10a588e00f1", 00:08:17.436 "is_configured": true, 00:08:17.436 "data_offset": 2048, 00:08:17.436 "data_size": 63488 00:08:17.436 }, 00:08:17.436 { 00:08:17.436 "name": "BaseBdev2", 00:08:17.436 "uuid": "20dd603c-c166-5b46-b981-e5f89e9dfe78", 00:08:17.436 "is_configured": true, 00:08:17.436 "data_offset": 2048, 00:08:17.436 "data_size": 63488 00:08:17.436 } 00:08:17.436 ] 00:08:17.436 }' 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.436 16:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.695 16:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:17.695 16:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.955 [2024-09-28 16:09:32.406977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.892 "name": "raid_bdev1", 00:08:18.892 "uuid": "6a798451-86db-4ae7-afd4-9a908a3c5d41", 00:08:18.892 "strip_size_kb": 64, 00:08:18.892 "state": "online", 00:08:18.892 "raid_level": "concat", 00:08:18.892 "superblock": true, 00:08:18.892 "num_base_bdevs": 2, 00:08:18.892 "num_base_bdevs_discovered": 2, 00:08:18.892 "num_base_bdevs_operational": 2, 00:08:18.892 "base_bdevs_list": [ 00:08:18.892 { 00:08:18.892 "name": "BaseBdev1", 00:08:18.892 "uuid": "30a302d1-e473-5a43-93e2-d10a588e00f1", 00:08:18.892 "is_configured": true, 00:08:18.892 "data_offset": 2048, 00:08:18.892 "data_size": 63488 00:08:18.892 }, 00:08:18.892 { 00:08:18.892 "name": "BaseBdev2", 00:08:18.892 "uuid": "20dd603c-c166-5b46-b981-e5f89e9dfe78", 00:08:18.892 "is_configured": true, 00:08:18.892 "data_offset": 2048, 00:08:18.892 "data_size": 63488 00:08:18.892 } 00:08:18.892 ] 00:08:18.892 }' 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.892 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 16:09:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.151 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.151 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 [2024-09-28 16:09:33.750993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.151 [2024-09-28 16:09:33.751133] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.151 [2024-09-28 16:09:33.753761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.151 [2024-09-28 16:09:33.753852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.151 [2024-09-28 16:09:33.753906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.151 [2024-09-28 16:09:33.753964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:19.151 { 00:08:19.151 "results": [ 00:08:19.151 { 00:08:19.151 "job": "raid_bdev1", 00:08:19.151 "core_mask": "0x1", 00:08:19.151 "workload": "randrw", 00:08:19.151 "percentage": 50, 00:08:19.151 "status": "finished", 00:08:19.151 "queue_depth": 1, 00:08:19.151 "io_size": 131072, 00:08:19.151 "runtime": 1.344751, 00:08:19.151 "iops": 15503.985496199668, 00:08:19.151 "mibps": 1937.9981870249585, 00:08:19.151 "io_failed": 1, 00:08:19.151 "io_timeout": 0, 00:08:19.151 "avg_latency_us": 90.55782652131569, 00:08:19.151 "min_latency_us": 24.370305676855896, 00:08:19.151 "max_latency_us": 1366.5257641921398 00:08:19.151 } 00:08:19.151 ], 00:08:19.151 "core_count": 1 00:08:19.152 } 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62551 00:08:19.152 16:09:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62551 ']' 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62551 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62551 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62551' 00:08:19.152 killing process with pid 62551 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62551 00:08:19.152 [2024-09-28 16:09:33.802445] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.152 16:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62551 00:08:19.411 [2024-09-28 16:09:33.940185] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0N07WHTcEa 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:20.791 ************************************ 00:08:20.791 END TEST raid_write_error_test 00:08:20.791 ************************************ 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:20.791 00:08:20.791 real 0m4.559s 00:08:20.791 user 0m5.257s 00:08:20.791 sys 0m0.651s 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.791 16:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.791 16:09:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:20.791 16:09:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:20.791 16:09:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:20.791 16:09:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.791 16:09:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.791 ************************************ 00:08:20.791 START TEST raid_state_function_test 00:08:20.791 ************************************ 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:20.792 Process raid pid: 62695 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62695 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process 
raid pid: 62695' 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62695 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62695 ']' 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.792 16:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.051 [2024-09-28 16:09:35.508797] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:21.052 [2024-09-28 16:09:35.508963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.052 [2024-09-28 16:09:35.679279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.313 [2024-09-28 16:09:35.919272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.572 [2024-09-28 16:09:36.155974] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.572 [2024-09-28 16:09:36.156078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.831 [2024-09-28 16:09:36.332057] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.831 [2024-09-28 16:09:36.332126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.831 [2024-09-28 16:09:36.332135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.831 [2024-09-28 16:09:36.332161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.831 16:09:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.831 "name": "Existed_Raid", 00:08:21.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.831 "strip_size_kb": 0, 00:08:21.831 "state": "configuring", 00:08:21.831 
"raid_level": "raid1", 00:08:21.831 "superblock": false, 00:08:21.831 "num_base_bdevs": 2, 00:08:21.831 "num_base_bdevs_discovered": 0, 00:08:21.831 "num_base_bdevs_operational": 2, 00:08:21.831 "base_bdevs_list": [ 00:08:21.831 { 00:08:21.831 "name": "BaseBdev1", 00:08:21.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.831 "is_configured": false, 00:08:21.831 "data_offset": 0, 00:08:21.831 "data_size": 0 00:08:21.831 }, 00:08:21.831 { 00:08:21.831 "name": "BaseBdev2", 00:08:21.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.831 "is_configured": false, 00:08:21.831 "data_offset": 0, 00:08:21.831 "data_size": 0 00:08:21.831 } 00:08:21.831 ] 00:08:21.831 }' 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.831 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.401 [2024-09-28 16:09:36.823242] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.401 [2024-09-28 16:09:36.823385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:22.401 [2024-09-28 16:09:36.835171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.401 [2024-09-28 16:09:36.835280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.401 [2024-09-28 16:09:36.835333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.401 [2024-09-28 16:09:36.835361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.401 [2024-09-28 16:09:36.898503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.401 BaseBdev1 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.401 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.402 [ 00:08:22.402 { 00:08:22.402 "name": "BaseBdev1", 00:08:22.402 "aliases": [ 00:08:22.402 "fc6d75b7-a909-4d97-a6f7-2ea48d33d427" 00:08:22.402 ], 00:08:22.402 "product_name": "Malloc disk", 00:08:22.402 "block_size": 512, 00:08:22.402 "num_blocks": 65536, 00:08:22.402 "uuid": "fc6d75b7-a909-4d97-a6f7-2ea48d33d427", 00:08:22.402 "assigned_rate_limits": { 00:08:22.402 "rw_ios_per_sec": 0, 00:08:22.402 "rw_mbytes_per_sec": 0, 00:08:22.402 "r_mbytes_per_sec": 0, 00:08:22.402 "w_mbytes_per_sec": 0 00:08:22.402 }, 00:08:22.402 "claimed": true, 00:08:22.402 "claim_type": "exclusive_write", 00:08:22.402 "zoned": false, 00:08:22.402 "supported_io_types": { 00:08:22.402 "read": true, 00:08:22.402 "write": true, 00:08:22.402 "unmap": true, 00:08:22.402 "flush": true, 00:08:22.402 "reset": true, 00:08:22.402 "nvme_admin": false, 00:08:22.402 "nvme_io": false, 00:08:22.402 "nvme_io_md": false, 00:08:22.402 "write_zeroes": true, 00:08:22.402 "zcopy": true, 00:08:22.402 "get_zone_info": false, 00:08:22.402 "zone_management": false, 00:08:22.402 "zone_append": false, 00:08:22.402 "compare": false, 00:08:22.402 "compare_and_write": false, 00:08:22.402 "abort": true, 00:08:22.402 "seek_hole": false, 00:08:22.402 "seek_data": false, 00:08:22.402 "copy": true, 00:08:22.402 "nvme_iov_md": 
false 00:08:22.402 }, 00:08:22.402 "memory_domains": [ 00:08:22.402 { 00:08:22.402 "dma_device_id": "system", 00:08:22.402 "dma_device_type": 1 00:08:22.402 }, 00:08:22.402 { 00:08:22.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.402 "dma_device_type": 2 00:08:22.402 } 00:08:22.402 ], 00:08:22.402 "driver_specific": {} 00:08:22.402 } 00:08:22.402 ] 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.402 
16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.402 "name": "Existed_Raid", 00:08:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.402 "strip_size_kb": 0, 00:08:22.402 "state": "configuring", 00:08:22.402 "raid_level": "raid1", 00:08:22.402 "superblock": false, 00:08:22.402 "num_base_bdevs": 2, 00:08:22.402 "num_base_bdevs_discovered": 1, 00:08:22.402 "num_base_bdevs_operational": 2, 00:08:22.402 "base_bdevs_list": [ 00:08:22.402 { 00:08:22.402 "name": "BaseBdev1", 00:08:22.402 "uuid": "fc6d75b7-a909-4d97-a6f7-2ea48d33d427", 00:08:22.402 "is_configured": true, 00:08:22.402 "data_offset": 0, 00:08:22.402 "data_size": 65536 00:08:22.402 }, 00:08:22.402 { 00:08:22.402 "name": "BaseBdev2", 00:08:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.402 "is_configured": false, 00:08:22.402 "data_offset": 0, 00:08:22.402 "data_size": 0 00:08:22.402 } 00:08:22.402 ] 00:08:22.402 }' 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.402 16:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.972 [2024-09-28 16:09:37.361785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.972 [2024-09-28 16:09:37.361933] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.972 [2024-09-28 16:09:37.369762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.972 [2024-09-28 16:09:37.372000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.972 [2024-09-28 16:09:37.372092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.972 "name": "Existed_Raid", 00:08:22.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.972 "strip_size_kb": 0, 00:08:22.972 "state": "configuring", 00:08:22.972 "raid_level": "raid1", 00:08:22.972 "superblock": false, 00:08:22.972 "num_base_bdevs": 2, 00:08:22.972 "num_base_bdevs_discovered": 1, 00:08:22.972 "num_base_bdevs_operational": 2, 00:08:22.972 "base_bdevs_list": [ 00:08:22.972 { 00:08:22.972 "name": "BaseBdev1", 00:08:22.972 "uuid": "fc6d75b7-a909-4d97-a6f7-2ea48d33d427", 00:08:22.972 "is_configured": true, 00:08:22.972 "data_offset": 0, 00:08:22.972 "data_size": 65536 00:08:22.972 }, 00:08:22.972 { 00:08:22.972 "name": "BaseBdev2", 00:08:22.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.972 "is_configured": false, 00:08:22.972 "data_offset": 0, 00:08:22.972 "data_size": 0 00:08:22.972 } 00:08:22.972 ] 
00:08:22.972 }' 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.972 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.232 [2024-09-28 16:09:37.902417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.232 [2024-09-28 16:09:37.902479] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.232 [2024-09-28 16:09:37.902492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:23.232 [2024-09-28 16:09:37.902787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:23.232 [2024-09-28 16:09:37.902984] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.232 [2024-09-28 16:09:37.902998] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:23.232 [2024-09-28 16:09:37.903291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.232 BaseBdev2 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:08:23.232 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.233 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.493 [ 00:08:23.493 { 00:08:23.493 "name": "BaseBdev2", 00:08:23.493 "aliases": [ 00:08:23.493 "9c919ca9-32d6-45c9-8562-55fccd0a989c" 00:08:23.493 ], 00:08:23.493 "product_name": "Malloc disk", 00:08:23.493 "block_size": 512, 00:08:23.493 "num_blocks": 65536, 00:08:23.493 "uuid": "9c919ca9-32d6-45c9-8562-55fccd0a989c", 00:08:23.493 "assigned_rate_limits": { 00:08:23.493 "rw_ios_per_sec": 0, 00:08:23.493 "rw_mbytes_per_sec": 0, 00:08:23.493 "r_mbytes_per_sec": 0, 00:08:23.493 "w_mbytes_per_sec": 0 00:08:23.493 }, 00:08:23.493 "claimed": true, 00:08:23.493 "claim_type": "exclusive_write", 00:08:23.493 "zoned": false, 00:08:23.493 "supported_io_types": { 00:08:23.493 "read": true, 00:08:23.493 "write": true, 00:08:23.493 "unmap": true, 00:08:23.493 "flush": true, 00:08:23.493 "reset": true, 00:08:23.493 "nvme_admin": false, 00:08:23.493 "nvme_io": false, 00:08:23.493 "nvme_io_md": false, 00:08:23.493 "write_zeroes": 
true, 00:08:23.493 "zcopy": true, 00:08:23.493 "get_zone_info": false, 00:08:23.493 "zone_management": false, 00:08:23.493 "zone_append": false, 00:08:23.493 "compare": false, 00:08:23.493 "compare_and_write": false, 00:08:23.493 "abort": true, 00:08:23.493 "seek_hole": false, 00:08:23.493 "seek_data": false, 00:08:23.493 "copy": true, 00:08:23.493 "nvme_iov_md": false 00:08:23.493 }, 00:08:23.493 "memory_domains": [ 00:08:23.493 { 00:08:23.493 "dma_device_id": "system", 00:08:23.493 "dma_device_type": 1 00:08:23.493 }, 00:08:23.493 { 00:08:23.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.493 "dma_device_type": 2 00:08:23.493 } 00:08:23.493 ], 00:08:23.493 "driver_specific": {} 00:08:23.493 } 00:08:23.493 ] 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.493 16:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.493 "name": "Existed_Raid", 00:08:23.493 "uuid": "65e5e068-0a87-491c-ab29-c8ae26e499db", 00:08:23.493 "strip_size_kb": 0, 00:08:23.493 "state": "online", 00:08:23.493 "raid_level": "raid1", 00:08:23.493 "superblock": false, 00:08:23.493 "num_base_bdevs": 2, 00:08:23.493 "num_base_bdevs_discovered": 2, 00:08:23.493 "num_base_bdevs_operational": 2, 00:08:23.493 "base_bdevs_list": [ 00:08:23.493 { 00:08:23.493 "name": "BaseBdev1", 00:08:23.493 "uuid": "fc6d75b7-a909-4d97-a6f7-2ea48d33d427", 00:08:23.493 "is_configured": true, 00:08:23.493 "data_offset": 0, 00:08:23.493 "data_size": 65536 00:08:23.493 }, 00:08:23.493 { 00:08:23.493 "name": "BaseBdev2", 00:08:23.493 "uuid": "9c919ca9-32d6-45c9-8562-55fccd0a989c", 00:08:23.493 "is_configured": true, 00:08:23.493 "data_offset": 0, 00:08:23.493 "data_size": 65536 00:08:23.493 } 00:08:23.493 ] 00:08:23.493 }' 00:08:23.493 16:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.493 16:09:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.752 [2024-09-28 16:09:38.381982] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.752 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.752 "name": "Existed_Raid", 00:08:23.752 "aliases": [ 00:08:23.752 "65e5e068-0a87-491c-ab29-c8ae26e499db" 00:08:23.752 ], 00:08:23.752 "product_name": "Raid Volume", 00:08:23.752 "block_size": 512, 00:08:23.752 "num_blocks": 65536, 00:08:23.752 "uuid": "65e5e068-0a87-491c-ab29-c8ae26e499db", 00:08:23.752 "assigned_rate_limits": { 00:08:23.752 "rw_ios_per_sec": 0, 00:08:23.752 "rw_mbytes_per_sec": 0, 00:08:23.752 "r_mbytes_per_sec": 0, 00:08:23.752 
"w_mbytes_per_sec": 0 00:08:23.752 }, 00:08:23.752 "claimed": false, 00:08:23.752 "zoned": false, 00:08:23.752 "supported_io_types": { 00:08:23.752 "read": true, 00:08:23.752 "write": true, 00:08:23.752 "unmap": false, 00:08:23.752 "flush": false, 00:08:23.752 "reset": true, 00:08:23.752 "nvme_admin": false, 00:08:23.752 "nvme_io": false, 00:08:23.752 "nvme_io_md": false, 00:08:23.752 "write_zeroes": true, 00:08:23.752 "zcopy": false, 00:08:23.752 "get_zone_info": false, 00:08:23.752 "zone_management": false, 00:08:23.752 "zone_append": false, 00:08:23.752 "compare": false, 00:08:23.752 "compare_and_write": false, 00:08:23.752 "abort": false, 00:08:23.752 "seek_hole": false, 00:08:23.752 "seek_data": false, 00:08:23.752 "copy": false, 00:08:23.752 "nvme_iov_md": false 00:08:23.752 }, 00:08:23.752 "memory_domains": [ 00:08:23.752 { 00:08:23.752 "dma_device_id": "system", 00:08:23.752 "dma_device_type": 1 00:08:23.752 }, 00:08:23.752 { 00:08:23.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.752 "dma_device_type": 2 00:08:23.752 }, 00:08:23.752 { 00:08:23.752 "dma_device_id": "system", 00:08:23.752 "dma_device_type": 1 00:08:23.752 }, 00:08:23.752 { 00:08:23.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.752 "dma_device_type": 2 00:08:23.752 } 00:08:23.752 ], 00:08:23.753 "driver_specific": { 00:08:23.753 "raid": { 00:08:23.753 "uuid": "65e5e068-0a87-491c-ab29-c8ae26e499db", 00:08:23.753 "strip_size_kb": 0, 00:08:23.753 "state": "online", 00:08:23.753 "raid_level": "raid1", 00:08:23.753 "superblock": false, 00:08:23.753 "num_base_bdevs": 2, 00:08:23.753 "num_base_bdevs_discovered": 2, 00:08:23.753 "num_base_bdevs_operational": 2, 00:08:23.753 "base_bdevs_list": [ 00:08:23.753 { 00:08:23.753 "name": "BaseBdev1", 00:08:23.753 "uuid": "fc6d75b7-a909-4d97-a6f7-2ea48d33d427", 00:08:23.753 "is_configured": true, 00:08:23.753 "data_offset": 0, 00:08:23.753 "data_size": 65536 00:08:23.753 }, 00:08:23.753 { 00:08:23.753 "name": "BaseBdev2", 00:08:23.753 "uuid": 
"9c919ca9-32d6-45c9-8562-55fccd0a989c", 00:08:23.753 "is_configured": true, 00:08:23.753 "data_offset": 0, 00:08:23.753 "data_size": 65536 00:08:23.753 } 00:08:23.753 ] 00:08:23.753 } 00:08:23.753 } 00:08:23.753 }' 00:08:23.753 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.012 BaseBdev2' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.012 16:09:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.012 [2024-09-28 16:09:38.593387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:24.012 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.272 "name": "Existed_Raid", 00:08:24.272 "uuid": "65e5e068-0a87-491c-ab29-c8ae26e499db", 00:08:24.272 "strip_size_kb": 0, 00:08:24.272 "state": "online", 00:08:24.272 "raid_level": "raid1", 00:08:24.272 "superblock": false, 00:08:24.272 "num_base_bdevs": 2, 00:08:24.272 "num_base_bdevs_discovered": 1, 00:08:24.272 "num_base_bdevs_operational": 1, 00:08:24.272 "base_bdevs_list": [ 00:08:24.272 { 
00:08:24.272 "name": null, 00:08:24.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.272 "is_configured": false, 00:08:24.272 "data_offset": 0, 00:08:24.272 "data_size": 65536 00:08:24.272 }, 00:08:24.272 { 00:08:24.272 "name": "BaseBdev2", 00:08:24.272 "uuid": "9c919ca9-32d6-45c9-8562-55fccd0a989c", 00:08:24.272 "is_configured": true, 00:08:24.272 "data_offset": 0, 00:08:24.272 "data_size": 65536 00:08:24.272 } 00:08:24.272 ] 00:08:24.272 }' 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.272 16:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.532 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:24.532 [2024-09-28 16:09:39.175905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.532 [2024-09-28 16:09:39.176072] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.791 [2024-09-28 16:09:39.278614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.791 [2024-09-28 16:09:39.278725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.791 [2024-09-28 16:09:39.278769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:24.791 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.791 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.791 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62695 00:08:24.792 16:09:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62695 ']' 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62695 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62695 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62695' 00:08:24.792 killing process with pid 62695 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62695 00:08:24.792 [2024-09-28 16:09:39.367115] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.792 16:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62695 00:08:24.792 [2024-09-28 16:09:39.383909] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:26.174 00:08:26.174 real 0m5.318s 00:08:26.174 user 0m7.434s 00:08:26.174 sys 0m0.967s 00:08:26.174 ************************************ 00:08:26.174 END TEST raid_state_function_test 00:08:26.174 ************************************ 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.174 16:09:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:26.174 16:09:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:26.174 16:09:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.174 16:09:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.174 ************************************ 00:08:26.174 START TEST raid_state_function_test_sb 00:08:26.174 ************************************ 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:26.174 Process raid pid: 62948 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62948 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62948' 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62948 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62948 ']' 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.174 16:09:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.174 16:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.435 [2024-09-28 16:09:40.906098] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:26.435 [2024-09-28 16:09:40.906351] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.435 [2024-09-28 16:09:41.090159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.694 [2024-09-28 16:09:41.351997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.954 [2024-09-28 16:09:41.583120] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.954 [2024-09-28 16:09:41.583159] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.214 [2024-09-28 16:09:41.737517] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.214 [2024-09-28 16:09:41.737641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.214 [2024-09-28 16:09:41.737675] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.214 [2024-09-28 16:09:41.737699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.214 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.214 "name": "Existed_Raid", 00:08:27.214 "uuid": "54cddc90-cc66-45be-ba2f-3ba9cd8480d5", 00:08:27.214 "strip_size_kb": 0, 00:08:27.214 "state": "configuring", 00:08:27.214 "raid_level": "raid1", 00:08:27.214 "superblock": true, 00:08:27.214 "num_base_bdevs": 2, 00:08:27.214 "num_base_bdevs_discovered": 0, 00:08:27.214 "num_base_bdevs_operational": 2, 00:08:27.214 "base_bdevs_list": [ 00:08:27.214 { 00:08:27.214 "name": "BaseBdev1", 00:08:27.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.214 "is_configured": false, 00:08:27.214 "data_offset": 0, 00:08:27.214 "data_size": 0 00:08:27.214 }, 00:08:27.214 { 00:08:27.214 "name": "BaseBdev2", 00:08:27.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.214 "is_configured": false, 00:08:27.214 "data_offset": 0, 00:08:27.214 "data_size": 0 00:08:27.214 } 00:08:27.214 ] 00:08:27.215 }' 00:08:27.215 16:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.215 16:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.798 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.798 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.798 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.798 [2024-09-28 16:09:42.196624] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:27.798 [2024-09-28 16:09:42.196701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:27.798 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.798 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.798 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.798 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 [2024-09-28 16:09:42.208631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.799 [2024-09-28 16:09:42.208725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.799 [2024-09-28 16:09:42.208753] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.799 [2024-09-28 16:09:42.208779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 [2024-09-28 16:09:42.269141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.799 BaseBdev1 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 [ 00:08:27.799 { 00:08:27.799 "name": "BaseBdev1", 00:08:27.799 "aliases": [ 00:08:27.799 "64736f22-d96b-4810-81ef-f1d852efe0e5" 00:08:27.799 ], 00:08:27.799 "product_name": "Malloc disk", 00:08:27.799 "block_size": 512, 00:08:27.799 "num_blocks": 65536, 00:08:27.799 "uuid": "64736f22-d96b-4810-81ef-f1d852efe0e5", 00:08:27.799 "assigned_rate_limits": { 00:08:27.799 "rw_ios_per_sec": 0, 00:08:27.799 "rw_mbytes_per_sec": 0, 00:08:27.799 "r_mbytes_per_sec": 0, 00:08:27.799 "w_mbytes_per_sec": 0 00:08:27.799 }, 00:08:27.799 "claimed": true, 
00:08:27.799 "claim_type": "exclusive_write", 00:08:27.799 "zoned": false, 00:08:27.799 "supported_io_types": { 00:08:27.799 "read": true, 00:08:27.799 "write": true, 00:08:27.799 "unmap": true, 00:08:27.799 "flush": true, 00:08:27.799 "reset": true, 00:08:27.799 "nvme_admin": false, 00:08:27.799 "nvme_io": false, 00:08:27.799 "nvme_io_md": false, 00:08:27.799 "write_zeroes": true, 00:08:27.799 "zcopy": true, 00:08:27.799 "get_zone_info": false, 00:08:27.799 "zone_management": false, 00:08:27.799 "zone_append": false, 00:08:27.799 "compare": false, 00:08:27.799 "compare_and_write": false, 00:08:27.799 "abort": true, 00:08:27.799 "seek_hole": false, 00:08:27.799 "seek_data": false, 00:08:27.799 "copy": true, 00:08:27.799 "nvme_iov_md": false 00:08:27.799 }, 00:08:27.799 "memory_domains": [ 00:08:27.799 { 00:08:27.799 "dma_device_id": "system", 00:08:27.799 "dma_device_type": 1 00:08:27.799 }, 00:08:27.799 { 00:08:27.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.799 "dma_device_type": 2 00:08:27.799 } 00:08:27.799 ], 00:08:27.799 "driver_specific": {} 00:08:27.799 } 00:08:27.799 ] 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.799 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.799 "name": "Existed_Raid", 00:08:27.799 "uuid": "2bef2c47-24aa-436e-9ce2-a76869b2d4e1", 00:08:27.799 "strip_size_kb": 0, 00:08:27.799 "state": "configuring", 00:08:27.799 "raid_level": "raid1", 00:08:27.799 "superblock": true, 00:08:27.799 "num_base_bdevs": 2, 00:08:27.799 "num_base_bdevs_discovered": 1, 00:08:27.799 "num_base_bdevs_operational": 2, 00:08:27.799 "base_bdevs_list": [ 00:08:27.799 { 00:08:27.799 "name": "BaseBdev1", 00:08:27.799 "uuid": "64736f22-d96b-4810-81ef-f1d852efe0e5", 00:08:27.799 "is_configured": true, 00:08:27.799 "data_offset": 2048, 00:08:27.799 "data_size": 63488 00:08:27.799 }, 00:08:27.799 { 00:08:27.799 "name": "BaseBdev2", 00:08:27.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.799 "is_configured": false, 00:08:27.799 
"data_offset": 0, 00:08:27.800 "data_size": 0 00:08:27.800 } 00:08:27.800 ] 00:08:27.800 }' 00:08:27.800 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.800 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.381 [2024-09-28 16:09:42.772283] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.381 [2024-09-28 16:09:42.772370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.381 [2024-09-28 16:09:42.784304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.381 [2024-09-28 16:09:42.786399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.381 [2024-09-28 16:09:42.786487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.381 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.382 16:09:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.382 "name": "Existed_Raid", 00:08:28.382 "uuid": "800e6498-718e-4a08-951c-6be70902af37", 00:08:28.382 "strip_size_kb": 0, 00:08:28.382 "state": "configuring", 00:08:28.382 "raid_level": "raid1", 00:08:28.382 "superblock": true, 00:08:28.382 "num_base_bdevs": 2, 00:08:28.382 "num_base_bdevs_discovered": 1, 00:08:28.382 "num_base_bdevs_operational": 2, 00:08:28.382 "base_bdevs_list": [ 00:08:28.382 { 00:08:28.382 "name": "BaseBdev1", 00:08:28.382 "uuid": "64736f22-d96b-4810-81ef-f1d852efe0e5", 00:08:28.382 "is_configured": true, 00:08:28.382 "data_offset": 2048, 00:08:28.382 "data_size": 63488 00:08:28.382 }, 00:08:28.382 { 00:08:28.382 "name": "BaseBdev2", 00:08:28.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.382 "is_configured": false, 00:08:28.382 "data_offset": 0, 00:08:28.382 "data_size": 0 00:08:28.382 } 00:08:28.382 ] 00:08:28.382 }' 00:08:28.382 16:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.382 16:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.641 [2024-09-28 16:09:43.298020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.641 [2024-09-28 16:09:43.298429] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.641 [2024-09-28 16:09:43.298489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:28.641 [2024-09-28 16:09:43.298819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:28.641 
BaseBdev2 00:08:28.641 [2024-09-28 16:09:43.299032] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.641 [2024-09-28 16:09:43.299049] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:28.641 [2024-09-28 16:09:43.299210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.641 16:09:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.641 [ 00:08:28.641 { 00:08:28.641 "name": "BaseBdev2", 00:08:28.641 "aliases": [ 00:08:28.641 "b54314b6-f72a-4caa-b14d-f9fe7a71ace0" 00:08:28.641 ], 00:08:28.641 "product_name": "Malloc disk", 00:08:28.641 "block_size": 512, 00:08:28.641 "num_blocks": 65536, 00:08:28.641 "uuid": "b54314b6-f72a-4caa-b14d-f9fe7a71ace0", 00:08:28.899 "assigned_rate_limits": { 00:08:28.899 "rw_ios_per_sec": 0, 00:08:28.899 "rw_mbytes_per_sec": 0, 00:08:28.899 "r_mbytes_per_sec": 0, 00:08:28.899 "w_mbytes_per_sec": 0 00:08:28.899 }, 00:08:28.899 "claimed": true, 00:08:28.899 "claim_type": "exclusive_write", 00:08:28.899 "zoned": false, 00:08:28.899 "supported_io_types": { 00:08:28.899 "read": true, 00:08:28.899 "write": true, 00:08:28.899 "unmap": true, 00:08:28.899 "flush": true, 00:08:28.899 "reset": true, 00:08:28.899 "nvme_admin": false, 00:08:28.899 "nvme_io": false, 00:08:28.899 "nvme_io_md": false, 00:08:28.899 "write_zeroes": true, 00:08:28.899 "zcopy": true, 00:08:28.899 "get_zone_info": false, 00:08:28.899 "zone_management": false, 00:08:28.899 "zone_append": false, 00:08:28.899 "compare": false, 00:08:28.899 "compare_and_write": false, 00:08:28.899 "abort": true, 00:08:28.899 "seek_hole": false, 00:08:28.899 "seek_data": false, 00:08:28.899 "copy": true, 00:08:28.899 "nvme_iov_md": false 00:08:28.899 }, 00:08:28.899 "memory_domains": [ 00:08:28.899 { 00:08:28.899 "dma_device_id": "system", 00:08:28.899 "dma_device_type": 1 00:08:28.899 }, 00:08:28.899 { 00:08:28.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.899 "dma_device_type": 2 00:08:28.899 } 00:08:28.899 ], 00:08:28.899 "driver_specific": {} 00:08:28.899 } 00:08:28.899 ] 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.899 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:28.899 "name": "Existed_Raid", 00:08:28.899 "uuid": "800e6498-718e-4a08-951c-6be70902af37", 00:08:28.899 "strip_size_kb": 0, 00:08:28.899 "state": "online", 00:08:28.899 "raid_level": "raid1", 00:08:28.899 "superblock": true, 00:08:28.899 "num_base_bdevs": 2, 00:08:28.899 "num_base_bdevs_discovered": 2, 00:08:28.899 "num_base_bdevs_operational": 2, 00:08:28.899 "base_bdevs_list": [ 00:08:28.899 { 00:08:28.899 "name": "BaseBdev1", 00:08:28.899 "uuid": "64736f22-d96b-4810-81ef-f1d852efe0e5", 00:08:28.899 "is_configured": true, 00:08:28.899 "data_offset": 2048, 00:08:28.899 "data_size": 63488 00:08:28.899 }, 00:08:28.899 { 00:08:28.899 "name": "BaseBdev2", 00:08:28.899 "uuid": "b54314b6-f72a-4caa-b14d-f9fe7a71ace0", 00:08:28.899 "is_configured": true, 00:08:28.899 "data_offset": 2048, 00:08:28.899 "data_size": 63488 00:08:28.899 } 00:08:28.899 ] 00:08:28.899 }' 00:08:28.900 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.900 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.158 16:09:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.158 [2024-09-28 16:09:43.781489] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.158 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.159 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.159 "name": "Existed_Raid", 00:08:29.159 "aliases": [ 00:08:29.159 "800e6498-718e-4a08-951c-6be70902af37" 00:08:29.159 ], 00:08:29.159 "product_name": "Raid Volume", 00:08:29.159 "block_size": 512, 00:08:29.159 "num_blocks": 63488, 00:08:29.159 "uuid": "800e6498-718e-4a08-951c-6be70902af37", 00:08:29.159 "assigned_rate_limits": { 00:08:29.159 "rw_ios_per_sec": 0, 00:08:29.159 "rw_mbytes_per_sec": 0, 00:08:29.159 "r_mbytes_per_sec": 0, 00:08:29.159 "w_mbytes_per_sec": 0 00:08:29.159 }, 00:08:29.159 "claimed": false, 00:08:29.159 "zoned": false, 00:08:29.159 "supported_io_types": { 00:08:29.159 "read": true, 00:08:29.159 "write": true, 00:08:29.159 "unmap": false, 00:08:29.159 "flush": false, 00:08:29.159 "reset": true, 00:08:29.159 "nvme_admin": false, 00:08:29.159 "nvme_io": false, 00:08:29.159 "nvme_io_md": false, 00:08:29.159 "write_zeroes": true, 00:08:29.159 "zcopy": false, 00:08:29.159 "get_zone_info": false, 00:08:29.159 "zone_management": false, 00:08:29.159 "zone_append": false, 00:08:29.159 "compare": false, 00:08:29.159 "compare_and_write": false, 00:08:29.159 "abort": false, 00:08:29.159 "seek_hole": false, 00:08:29.159 "seek_data": false, 00:08:29.159 "copy": false, 00:08:29.159 "nvme_iov_md": false 00:08:29.159 }, 00:08:29.159 "memory_domains": [ 00:08:29.159 { 00:08:29.159 "dma_device_id": "system", 00:08:29.159 
"dma_device_type": 1 00:08:29.159 }, 00:08:29.159 { 00:08:29.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.159 "dma_device_type": 2 00:08:29.159 }, 00:08:29.159 { 00:08:29.159 "dma_device_id": "system", 00:08:29.159 "dma_device_type": 1 00:08:29.159 }, 00:08:29.159 { 00:08:29.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.159 "dma_device_type": 2 00:08:29.159 } 00:08:29.159 ], 00:08:29.159 "driver_specific": { 00:08:29.159 "raid": { 00:08:29.159 "uuid": "800e6498-718e-4a08-951c-6be70902af37", 00:08:29.159 "strip_size_kb": 0, 00:08:29.159 "state": "online", 00:08:29.159 "raid_level": "raid1", 00:08:29.159 "superblock": true, 00:08:29.159 "num_base_bdevs": 2, 00:08:29.159 "num_base_bdevs_discovered": 2, 00:08:29.159 "num_base_bdevs_operational": 2, 00:08:29.159 "base_bdevs_list": [ 00:08:29.159 { 00:08:29.159 "name": "BaseBdev1", 00:08:29.159 "uuid": "64736f22-d96b-4810-81ef-f1d852efe0e5", 00:08:29.159 "is_configured": true, 00:08:29.159 "data_offset": 2048, 00:08:29.159 "data_size": 63488 00:08:29.159 }, 00:08:29.159 { 00:08:29.159 "name": "BaseBdev2", 00:08:29.159 "uuid": "b54314b6-f72a-4caa-b14d-f9fe7a71ace0", 00:08:29.159 "is_configured": true, 00:08:29.159 "data_offset": 2048, 00:08:29.159 "data_size": 63488 00:08:29.159 } 00:08:29.159 ] 00:08:29.159 } 00:08:29.159 } 00:08:29.159 }' 00:08:29.159 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:29.418 BaseBdev2' 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.418 16:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.418 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.418 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.418 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.418 16:09:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.418 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.418 [2024-09-28 16:09:44.016861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.677 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.677 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:29.677 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:29.677 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.677 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:29.677 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:29.677 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.678 "name": "Existed_Raid", 00:08:29.678 "uuid": "800e6498-718e-4a08-951c-6be70902af37", 00:08:29.678 "strip_size_kb": 0, 00:08:29.678 "state": "online", 00:08:29.678 "raid_level": "raid1", 00:08:29.678 "superblock": true, 00:08:29.678 "num_base_bdevs": 2, 00:08:29.678 "num_base_bdevs_discovered": 1, 00:08:29.678 "num_base_bdevs_operational": 1, 00:08:29.678 "base_bdevs_list": [ 00:08:29.678 { 00:08:29.678 "name": null, 00:08:29.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.678 "is_configured": false, 00:08:29.678 "data_offset": 0, 00:08:29.678 "data_size": 63488 00:08:29.678 }, 00:08:29.678 { 00:08:29.678 "name": "BaseBdev2", 00:08:29.678 "uuid": "b54314b6-f72a-4caa-b14d-f9fe7a71ace0", 00:08:29.678 "is_configured": true, 00:08:29.678 "data_offset": 2048, 00:08:29.678 "data_size": 63488 00:08:29.678 } 00:08:29.678 ] 00:08:29.678 }' 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.678 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.938 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.938 [2024-09-28 16:09:44.582994] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.938 [2024-09-28 16:09:44.583178] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.198 [2024-09-28 16:09:44.683419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.198 [2024-09-28 16:09:44.683588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.198 [2024-09-28 16:09:44.683633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62948 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62948 ']' 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62948 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62948 00:08:30.198 killing process with pid 62948 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62948' 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62948 00:08:30.198 [2024-09-28 16:09:44.782448] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.198 16:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62948 00:08:30.198 [2024-09-28 16:09:44.800090] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.579 16:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.579 00:08:31.579 real 0m5.323s 00:08:31.579 user 0m7.433s 00:08:31.579 sys 0m0.991s 00:08:31.579 16:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.579 ************************************ 00:08:31.579 END TEST raid_state_function_test_sb 00:08:31.579 ************************************ 00:08:31.579 16:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.579 16:09:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:31.579 16:09:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:31.579 16:09:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.579 16:09:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.579 ************************************ 00:08:31.579 START TEST raid_superblock_test 00:08:31.579 ************************************ 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63200 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63200 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63200 ']' 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.579 16:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 [2024-09-28 16:09:46.295495] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:31.839 [2024-09-28 16:09:46.295706] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63200 ] 00:08:31.839 [2024-09-28 16:09:46.457051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.099 [2024-09-28 16:09:46.690502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.359 [2024-09-28 16:09:46.919843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.359 [2024-09-28 16:09:46.919984] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.619 16:09:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.619 malloc1 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.619 [2024-09-28 16:09:47.177762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.619 [2024-09-28 16:09:47.177893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.619 [2024-09-28 16:09:47.177938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.619 [2024-09-28 16:09:47.177988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.619 
[2024-09-28 16:09:47.180382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.619 [2024-09-28 16:09:47.180452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.619 pt1 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.619 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.620 malloc2 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.620 16:09:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.620 [2024-09-28 16:09:47.267964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.620 [2024-09-28 16:09:47.268061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.620 [2024-09-28 16:09:47.268117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:32.620 [2024-09-28 16:09:47.268159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.620 [2024-09-28 16:09:47.270513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.620 [2024-09-28 16:09:47.270577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.620 pt2 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.620 [2024-09-28 16:09:47.280022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.620 [2024-09-28 16:09:47.282141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.620 [2024-09-28 16:09:47.282366] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:32.620 [2024-09-28 16:09:47.282410] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.620 [2024-09-28 
16:09:47.282656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:32.620 [2024-09-28 16:09:47.282865] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:32.620 [2024-09-28 16:09:47.282911] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:32.620 [2024-09-28 16:09:47.283086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.620 16:09:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.620 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.879 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.879 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.879 "name": "raid_bdev1", 00:08:32.879 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:32.879 "strip_size_kb": 0, 00:08:32.879 "state": "online", 00:08:32.879 "raid_level": "raid1", 00:08:32.879 "superblock": true, 00:08:32.879 "num_base_bdevs": 2, 00:08:32.879 "num_base_bdevs_discovered": 2, 00:08:32.879 "num_base_bdevs_operational": 2, 00:08:32.879 "base_bdevs_list": [ 00:08:32.879 { 00:08:32.879 "name": "pt1", 00:08:32.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.879 "is_configured": true, 00:08:32.879 "data_offset": 2048, 00:08:32.879 "data_size": 63488 00:08:32.879 }, 00:08:32.879 { 00:08:32.879 "name": "pt2", 00:08:32.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.879 "is_configured": true, 00:08:32.879 "data_offset": 2048, 00:08:32.879 "data_size": 63488 00:08:32.879 } 00:08:32.879 ] 00:08:32.879 }' 00:08:32.879 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.879 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.138 
16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.138 [2024-09-28 16:09:47.711496] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.138 "name": "raid_bdev1", 00:08:33.138 "aliases": [ 00:08:33.138 "6a3829b0-c026-4527-ad16-0617f07e78ee" 00:08:33.138 ], 00:08:33.138 "product_name": "Raid Volume", 00:08:33.138 "block_size": 512, 00:08:33.138 "num_blocks": 63488, 00:08:33.138 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:33.138 "assigned_rate_limits": { 00:08:33.138 "rw_ios_per_sec": 0, 00:08:33.138 "rw_mbytes_per_sec": 0, 00:08:33.138 "r_mbytes_per_sec": 0, 00:08:33.138 "w_mbytes_per_sec": 0 00:08:33.138 }, 00:08:33.138 "claimed": false, 00:08:33.138 "zoned": false, 00:08:33.138 "supported_io_types": { 00:08:33.138 "read": true, 00:08:33.138 "write": true, 00:08:33.138 "unmap": false, 00:08:33.138 "flush": false, 00:08:33.138 "reset": true, 00:08:33.138 "nvme_admin": false, 00:08:33.138 "nvme_io": false, 00:08:33.138 "nvme_io_md": false, 00:08:33.138 "write_zeroes": true, 00:08:33.138 "zcopy": false, 00:08:33.138 "get_zone_info": false, 00:08:33.138 "zone_management": false, 00:08:33.138 "zone_append": false, 00:08:33.138 "compare": false, 00:08:33.138 "compare_and_write": false, 00:08:33.138 "abort": false, 00:08:33.138 "seek_hole": false, 
00:08:33.138 "seek_data": false, 00:08:33.138 "copy": false, 00:08:33.138 "nvme_iov_md": false 00:08:33.138 }, 00:08:33.138 "memory_domains": [ 00:08:33.138 { 00:08:33.138 "dma_device_id": "system", 00:08:33.138 "dma_device_type": 1 00:08:33.138 }, 00:08:33.138 { 00:08:33.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.138 "dma_device_type": 2 00:08:33.138 }, 00:08:33.138 { 00:08:33.138 "dma_device_id": "system", 00:08:33.138 "dma_device_type": 1 00:08:33.138 }, 00:08:33.138 { 00:08:33.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.138 "dma_device_type": 2 00:08:33.138 } 00:08:33.138 ], 00:08:33.138 "driver_specific": { 00:08:33.138 "raid": { 00:08:33.138 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:33.138 "strip_size_kb": 0, 00:08:33.138 "state": "online", 00:08:33.138 "raid_level": "raid1", 00:08:33.138 "superblock": true, 00:08:33.138 "num_base_bdevs": 2, 00:08:33.138 "num_base_bdevs_discovered": 2, 00:08:33.138 "num_base_bdevs_operational": 2, 00:08:33.138 "base_bdevs_list": [ 00:08:33.138 { 00:08:33.138 "name": "pt1", 00:08:33.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.138 "is_configured": true, 00:08:33.138 "data_offset": 2048, 00:08:33.138 "data_size": 63488 00:08:33.138 }, 00:08:33.138 { 00:08:33.138 "name": "pt2", 00:08:33.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.138 "is_configured": true, 00:08:33.138 "data_offset": 2048, 00:08:33.138 "data_size": 63488 00:08:33.138 } 00:08:33.138 ] 00:08:33.138 } 00:08:33.138 } 00:08:33.138 }' 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.138 pt2' 00:08:33.138 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.399 16:09:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 [2024-09-28 16:09:47.943038] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6a3829b0-c026-4527-ad16-0617f07e78ee 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6a3829b0-c026-4527-ad16-0617f07e78ee ']' 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 [2024-09-28 16:09:47.990726] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.399 [2024-09-28 16:09:47.990795] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.399 [2024-09-28 16:09:47.990914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.399 [2024-09-28 16:09:47.991011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.399 [2024-09-28 16:09:47.991064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.660 [2024-09-28 16:09:48.134501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:33.660 [2024-09-28 16:09:48.136761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.660 [2024-09-28 16:09:48.136887] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:08:33.660 [2024-09-28 16:09:48.136976] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:33.660 [2024-09-28 16:09:48.137024] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.660 [2024-09-28 16:09:48.137057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:33.660 request: 00:08:33.660 { 00:08:33.660 "name": "raid_bdev1", 00:08:33.660 "raid_level": "raid1", 00:08:33.660 "base_bdevs": [ 00:08:33.660 "malloc1", 00:08:33.660 "malloc2" 00:08:33.660 ], 00:08:33.660 "superblock": false, 00:08:33.660 "method": "bdev_raid_create", 00:08:33.660 "req_id": 1 00:08:33.660 } 00:08:33.660 Got JSON-RPC error response 00:08:33.660 response: 00:08:33.660 { 00:08:33.660 "code": -17, 00:08:33.660 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:33.660 } 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.660 [2024-09-28 16:09:48.182390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.660 [2024-09-28 16:09:48.182491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.660 [2024-09-28 16:09:48.182523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:33.660 [2024-09-28 16:09:48.182553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.660 [2024-09-28 16:09:48.185044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.660 [2024-09-28 16:09:48.185131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.660 [2024-09-28 16:09:48.185223] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:33.660 [2024-09-28 16:09:48.185327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:33.660 pt1 00:08:33.660 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.661 16:09:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.661 "name": "raid_bdev1", 00:08:33.661 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:33.661 "strip_size_kb": 0, 00:08:33.661 "state": "configuring", 00:08:33.661 "raid_level": "raid1", 00:08:33.661 "superblock": true, 00:08:33.661 "num_base_bdevs": 2, 00:08:33.661 "num_base_bdevs_discovered": 1, 00:08:33.661 "num_base_bdevs_operational": 2, 00:08:33.661 "base_bdevs_list": [ 00:08:33.661 { 00:08:33.661 "name": "pt1", 00:08:33.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.661 
"is_configured": true, 00:08:33.661 "data_offset": 2048, 00:08:33.661 "data_size": 63488 00:08:33.661 }, 00:08:33.661 { 00:08:33.661 "name": null, 00:08:33.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.661 "is_configured": false, 00:08:33.661 "data_offset": 2048, 00:08:33.661 "data_size": 63488 00:08:33.661 } 00:08:33.661 ] 00:08:33.661 }' 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.661 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.230 [2024-09-28 16:09:48.617713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.230 [2024-09-28 16:09:48.617851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.230 [2024-09-28 16:09:48.617896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:34.230 [2024-09-28 16:09:48.617930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.230 [2024-09-28 16:09:48.618506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.230 [2024-09-28 16:09:48.618575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.230 [2024-09-28 16:09:48.618702] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:34.230 [2024-09-28 16:09:48.618758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.230 [2024-09-28 16:09:48.618936] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:34.230 [2024-09-28 16:09:48.618978] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:34.230 [2024-09-28 16:09:48.619276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:34.230 [2024-09-28 16:09:48.619486] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:34.230 [2024-09-28 16:09:48.619527] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:34.230 [2024-09-28 16:09:48.619711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.230 pt2 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.230 
16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.230 "name": "raid_bdev1", 00:08:34.230 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:34.230 "strip_size_kb": 0, 00:08:34.230 "state": "online", 00:08:34.230 "raid_level": "raid1", 00:08:34.230 "superblock": true, 00:08:34.230 "num_base_bdevs": 2, 00:08:34.230 "num_base_bdevs_discovered": 2, 00:08:34.230 "num_base_bdevs_operational": 2, 00:08:34.230 "base_bdevs_list": [ 00:08:34.230 { 00:08:34.230 "name": "pt1", 00:08:34.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.230 "is_configured": true, 00:08:34.230 "data_offset": 2048, 00:08:34.230 "data_size": 63488 00:08:34.230 }, 00:08:34.230 { 00:08:34.230 "name": "pt2", 00:08:34.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.230 "is_configured": true, 00:08:34.230 "data_offset": 2048, 00:08:34.230 "data_size": 63488 00:08:34.230 } 00:08:34.230 ] 00:08:34.230 }' 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:34.230 16:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.490 [2024-09-28 16:09:49.045181] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.490 "name": "raid_bdev1", 00:08:34.490 "aliases": [ 00:08:34.490 "6a3829b0-c026-4527-ad16-0617f07e78ee" 00:08:34.490 ], 00:08:34.490 "product_name": "Raid Volume", 00:08:34.490 "block_size": 512, 00:08:34.490 "num_blocks": 63488, 00:08:34.490 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:34.490 "assigned_rate_limits": { 00:08:34.490 "rw_ios_per_sec": 0, 00:08:34.490 "rw_mbytes_per_sec": 0, 00:08:34.490 "r_mbytes_per_sec": 0, 00:08:34.490 "w_mbytes_per_sec": 0 
00:08:34.490 }, 00:08:34.490 "claimed": false, 00:08:34.490 "zoned": false, 00:08:34.490 "supported_io_types": { 00:08:34.490 "read": true, 00:08:34.490 "write": true, 00:08:34.490 "unmap": false, 00:08:34.490 "flush": false, 00:08:34.490 "reset": true, 00:08:34.490 "nvme_admin": false, 00:08:34.490 "nvme_io": false, 00:08:34.490 "nvme_io_md": false, 00:08:34.490 "write_zeroes": true, 00:08:34.490 "zcopy": false, 00:08:34.490 "get_zone_info": false, 00:08:34.490 "zone_management": false, 00:08:34.490 "zone_append": false, 00:08:34.490 "compare": false, 00:08:34.490 "compare_and_write": false, 00:08:34.490 "abort": false, 00:08:34.490 "seek_hole": false, 00:08:34.490 "seek_data": false, 00:08:34.490 "copy": false, 00:08:34.490 "nvme_iov_md": false 00:08:34.490 }, 00:08:34.490 "memory_domains": [ 00:08:34.490 { 00:08:34.490 "dma_device_id": "system", 00:08:34.490 "dma_device_type": 1 00:08:34.490 }, 00:08:34.490 { 00:08:34.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.490 "dma_device_type": 2 00:08:34.490 }, 00:08:34.490 { 00:08:34.490 "dma_device_id": "system", 00:08:34.490 "dma_device_type": 1 00:08:34.490 }, 00:08:34.490 { 00:08:34.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.490 "dma_device_type": 2 00:08:34.490 } 00:08:34.490 ], 00:08:34.490 "driver_specific": { 00:08:34.490 "raid": { 00:08:34.490 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:34.490 "strip_size_kb": 0, 00:08:34.490 "state": "online", 00:08:34.490 "raid_level": "raid1", 00:08:34.490 "superblock": true, 00:08:34.490 "num_base_bdevs": 2, 00:08:34.490 "num_base_bdevs_discovered": 2, 00:08:34.490 "num_base_bdevs_operational": 2, 00:08:34.490 "base_bdevs_list": [ 00:08:34.490 { 00:08:34.490 "name": "pt1", 00:08:34.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.490 "is_configured": true, 00:08:34.490 "data_offset": 2048, 00:08:34.490 "data_size": 63488 00:08:34.490 }, 00:08:34.490 { 00:08:34.490 "name": "pt2", 00:08:34.490 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:34.490 "is_configured": true, 00:08:34.490 "data_offset": 2048, 00:08:34.490 "data_size": 63488 00:08:34.490 } 00:08:34.490 ] 00:08:34.490 } 00:08:34.490 } 00:08:34.490 }' 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:34.490 pt2' 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.490 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:34.751 [2024-09-28 16:09:49.248862] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6a3829b0-c026-4527-ad16-0617f07e78ee '!=' 6a3829b0-c026-4527-ad16-0617f07e78ee ']' 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.751 [2024-09-28 16:09:49.276599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:34.751 "name": "raid_bdev1", 00:08:34.751 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:34.751 "strip_size_kb": 0, 00:08:34.751 "state": "online", 00:08:34.751 "raid_level": "raid1", 00:08:34.751 "superblock": true, 00:08:34.751 "num_base_bdevs": 2, 00:08:34.751 "num_base_bdevs_discovered": 1, 00:08:34.751 "num_base_bdevs_operational": 1, 00:08:34.751 "base_bdevs_list": [ 00:08:34.751 { 00:08:34.751 "name": null, 00:08:34.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.751 "is_configured": false, 00:08:34.751 "data_offset": 0, 00:08:34.751 "data_size": 63488 00:08:34.751 }, 00:08:34.751 { 00:08:34.751 "name": "pt2", 00:08:34.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.751 "is_configured": true, 00:08:34.751 "data_offset": 2048, 00:08:34.751 "data_size": 63488 00:08:34.751 } 00:08:34.751 ] 00:08:34.751 }' 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.751 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.321 [2024-09-28 16:09:49.723785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.321 [2024-09-28 16:09:49.723855] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.321 [2024-09-28 16:09:49.723971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.321 [2024-09-28 16:09:49.724039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.321 [2024-09-28 16:09:49.724085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.321 [2024-09-28 16:09:49.799667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:35.321 [2024-09-28 16:09:49.799762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.321 [2024-09-28 16:09:49.799797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:35.321 [2024-09-28 16:09:49.799853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.321 [2024-09-28 16:09:49.802405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.321 [2024-09-28 16:09:49.802443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:35.321 [2024-09-28 16:09:49.802538] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:35.321 [2024-09-28 16:09:49.802591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.321 [2024-09-28 16:09:49.802700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.321 [2024-09-28 16:09:49.802712] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:35.321 [2024-09-28 16:09:49.802972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:35.321 [2024-09-28 16:09:49.803139] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.321 [2024-09-28 16:09:49.803149] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:35.321 [2024-09-28 16:09:49.803301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.321 pt2 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:35.321 "name": "raid_bdev1", 00:08:35.321 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:35.321 "strip_size_kb": 0, 00:08:35.321 "state": "online", 00:08:35.321 "raid_level": "raid1", 00:08:35.321 "superblock": true, 00:08:35.321 "num_base_bdevs": 2, 00:08:35.321 "num_base_bdevs_discovered": 1, 00:08:35.321 "num_base_bdevs_operational": 1, 00:08:35.321 "base_bdevs_list": [ 00:08:35.321 { 00:08:35.321 "name": null, 00:08:35.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.321 "is_configured": false, 00:08:35.321 "data_offset": 2048, 00:08:35.321 "data_size": 63488 00:08:35.321 }, 00:08:35.321 { 00:08:35.321 "name": "pt2", 00:08:35.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.321 "is_configured": true, 00:08:35.321 "data_offset": 2048, 00:08:35.321 "data_size": 63488 00:08:35.321 } 00:08:35.321 ] 00:08:35.321 }' 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.321 16:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.581 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.581 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.581 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.581 [2024-09-28 16:09:50.258952] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.581 [2024-09-28 16:09:50.259020] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.581 [2024-09-28 16:09:50.259113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.581 [2024-09-28 16:09:50.259174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.581 [2024-09-28 16:09:50.259251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:35.581 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.841 [2024-09-28 16:09:50.318844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:35.841 [2024-09-28 16:09:50.318952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.841 [2024-09-28 16:09:50.318987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:35.841 [2024-09-28 16:09:50.319014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.841 [2024-09-28 16:09:50.321473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.841 [2024-09-28 16:09:50.321553] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:35.841 [2024-09-28 16:09:50.321650] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:35.841 [2024-09-28 16:09:50.321725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.841 [2024-09-28 16:09:50.321887] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:35.841 [2024-09-28 16:09:50.321937] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.841 [2024-09-28 16:09:50.321976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:35.841 [2024-09-28 16:09:50.322076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.841 [2024-09-28 16:09:50.322183] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:35.841 [2024-09-28 16:09:50.322218] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:35.841 [2024-09-28 16:09:50.322496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:35.841 [2024-09-28 16:09:50.322672] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:35.841 [2024-09-28 16:09:50.322715] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:35.841 [2024-09-28 16:09:50.322936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.841 pt1 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.841 "name": "raid_bdev1", 00:08:35.841 "uuid": "6a3829b0-c026-4527-ad16-0617f07e78ee", 00:08:35.841 "strip_size_kb": 0, 00:08:35.841 "state": "online", 00:08:35.841 "raid_level": "raid1", 00:08:35.841 "superblock": true, 00:08:35.841 "num_base_bdevs": 2, 00:08:35.841 "num_base_bdevs_discovered": 1, 00:08:35.841 "num_base_bdevs_operational": 
1, 00:08:35.841 "base_bdevs_list": [ 00:08:35.841 { 00:08:35.841 "name": null, 00:08:35.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.841 "is_configured": false, 00:08:35.841 "data_offset": 2048, 00:08:35.841 "data_size": 63488 00:08:35.841 }, 00:08:35.841 { 00:08:35.841 "name": "pt2", 00:08:35.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.841 "is_configured": true, 00:08:35.841 "data_offset": 2048, 00:08:35.841 "data_size": 63488 00:08:35.841 } 00:08:35.841 ] 00:08:35.841 }' 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.841 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.100 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:36.100 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.100 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.100 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.358 [2024-09-28 16:09:50.834256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6a3829b0-c026-4527-ad16-0617f07e78ee '!=' 6a3829b0-c026-4527-ad16-0617f07e78ee ']' 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63200 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63200 ']' 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63200 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63200 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63200' 00:08:36.358 killing process with pid 63200 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63200 00:08:36.358 [2024-09-28 16:09:50.896525] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.358 [2024-09-28 16:09:50.896650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.358 [2024-09-28 16:09:50.896721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.358 [2024-09-28 16:09:50.896777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:36.358 16:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63200
00:08:36.616 [2024-09-28 16:09:51.112068] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.997 16:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:37.997 00:08:37.997 real 0m6.225s 00:08:37.997 user 0m9.146s 00:08:37.997 sys 0m1.150s 00:08:37.997 16:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.997 16:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.997 ************************************ 00:08:37.997 END TEST raid_superblock_test 00:08:37.997 ************************************ 00:08:37.997 16:09:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:37.997 16:09:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:37.997 16:09:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.997 16:09:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.997 ************************************ 00:08:37.997 START TEST raid_read_error_test 00:08:37.997 ************************************ 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lzLoa8bq6f 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63530 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63530 00:08:37.997 
16:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63530 ']' 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.997 16:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.997 [2024-09-28 16:09:52.605259] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:37.997 [2024-09-28 16:09:52.605451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63530 ] 00:08:38.255 [2024-09-28 16:09:52.774803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.515 [2024-09-28 16:09:53.023931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.775 [2024-09-28 16:09:53.259544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.775 [2024-09-28 16:09:53.259583] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.775 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.775 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:38.775 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:38.775 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:38.775 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.775 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.034 BaseBdev1_malloc 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.034 true 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.034 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.034 [2024-09-28 16:09:53.493510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.034 [2024-09-28 16:09:53.493617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.035 [2024-09-28 16:09:53.493653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.035 [2024-09-28 16:09:53.493684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.035 [2024-09-28 16:09:53.496059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.035 [2024-09-28 16:09:53.496134] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.035 BaseBdev1 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.035 BaseBdev2_malloc 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.035 true 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.035 [2024-09-28 16:09:53.586241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.035 [2024-09-28 16:09:53.586354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.035 [2024-09-28 16:09:53.586375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:39.035 [2024-09-28 16:09:53.586386] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.035 [2024-09-28 16:09:53.588767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.035 [2024-09-28 16:09:53.588804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.035 BaseBdev2 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.035 [2024-09-28 16:09:53.598289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.035 [2024-09-28 16:09:53.600406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.035 [2024-09-28 16:09:53.600653] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:39.035 [2024-09-28 16:09:53.600708] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:39.035 [2024-09-28 16:09:53.600972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:39.035 [2024-09-28 16:09:53.601188] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:39.035 [2024-09-28 16:09:53.601246] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:39.035 [2024-09-28 16:09:53.601432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.035 "name": "raid_bdev1", 00:08:39.035 "uuid": "9c0ffa7e-9c55-481a-844a-a00ff4896981", 00:08:39.035 "strip_size_kb": 0, 00:08:39.035 "state": "online", 00:08:39.035 "raid_level": "raid1", 00:08:39.035 "superblock": true, 00:08:39.035 "num_base_bdevs": 2, 00:08:39.035 
"num_base_bdevs_discovered": 2, 00:08:39.035 "num_base_bdevs_operational": 2, 00:08:39.035 "base_bdevs_list": [ 00:08:39.035 { 00:08:39.035 "name": "BaseBdev1", 00:08:39.035 "uuid": "1929095a-36a7-59b1-b4cb-7f3cea32092e", 00:08:39.035 "is_configured": true, 00:08:39.035 "data_offset": 2048, 00:08:39.035 "data_size": 63488 00:08:39.035 }, 00:08:39.035 { 00:08:39.035 "name": "BaseBdev2", 00:08:39.035 "uuid": "776180e9-4d9f-575a-8e01-502c8aa11802", 00:08:39.035 "is_configured": true, 00:08:39.035 "data_offset": 2048, 00:08:39.035 "data_size": 63488 00:08:39.035 } 00:08:39.035 ] 00:08:39.035 }' 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.035 16:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.603 16:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:39.603 16:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:39.603 [2024-09-28 16:09:54.134809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:40.542 16:09:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.542 "name": "raid_bdev1", 00:08:40.542 "uuid": "9c0ffa7e-9c55-481a-844a-a00ff4896981", 00:08:40.542 "strip_size_kb": 0, 00:08:40.542 "state": "online", 
00:08:40.542 "raid_level": "raid1", 00:08:40.542 "superblock": true, 00:08:40.542 "num_base_bdevs": 2, 00:08:40.542 "num_base_bdevs_discovered": 2, 00:08:40.542 "num_base_bdevs_operational": 2, 00:08:40.542 "base_bdevs_list": [ 00:08:40.542 { 00:08:40.542 "name": "BaseBdev1", 00:08:40.542 "uuid": "1929095a-36a7-59b1-b4cb-7f3cea32092e", 00:08:40.542 "is_configured": true, 00:08:40.542 "data_offset": 2048, 00:08:40.542 "data_size": 63488 00:08:40.542 }, 00:08:40.542 { 00:08:40.542 "name": "BaseBdev2", 00:08:40.542 "uuid": "776180e9-4d9f-575a-8e01-502c8aa11802", 00:08:40.542 "is_configured": true, 00:08:40.542 "data_offset": 2048, 00:08:40.542 "data_size": 63488 00:08:40.542 } 00:08:40.542 ] 00:08:40.542 }' 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.542 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.801 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:40.801 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.801 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.060 [2024-09-28 16:09:55.488159] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.060 [2024-09-28 16:09:55.488293] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.060 [2024-09-28 16:09:55.490918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.060 [2024-09-28 16:09:55.491025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.060 [2024-09-28 16:09:55.491135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.060 [2024-09-28 16:09:55.491198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:41.060 { 00:08:41.060 "results": [ 00:08:41.060 { 00:08:41.060 "job": "raid_bdev1", 00:08:41.060 "core_mask": "0x1", 00:08:41.060 "workload": "randrw", 00:08:41.060 "percentage": 50, 00:08:41.060 "status": "finished", 00:08:41.060 "queue_depth": 1, 00:08:41.060 "io_size": 131072, 00:08:41.060 "runtime": 1.35411, 00:08:41.060 "iops": 15303.040373381778, 00:08:41.060 "mibps": 1912.8800466727223, 00:08:41.060 "io_failed": 0, 00:08:41.060 "io_timeout": 0, 00:08:41.060 "avg_latency_us": 62.987865901227686, 00:08:41.060 "min_latency_us": 21.910917030567685, 00:08:41.060 "max_latency_us": 1266.3615720524017 00:08:41.060 } 00:08:41.060 ], 00:08:41.060 "core_count": 1 00:08:41.060 } 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63530 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63530 ']' 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63530 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63530 00:08:41.060 killing process with pid 63530 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63530' 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63530 00:08:41.060 [2024-09-28 
16:09:55.538921] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.060 16:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63530 00:08:41.060 [2024-09-28 16:09:55.682346] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lzLoa8bq6f 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:42.442 ************************************ 00:08:42.442 END TEST raid_read_error_test 00:08:42.442 ************************************ 00:08:42.442 00:08:42.442 real 0m4.569s 00:08:42.442 user 0m5.293s 00:08:42.442 sys 0m0.664s 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.442 16:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.442 16:09:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:42.442 16:09:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:42.703 16:09:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.703 16:09:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.703 ************************************ 00:08:42.703 START TEST 
raid_write_error_test 00:08:42.703 ************************************ 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:42.703 16:09:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jqn3jEFn8w 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63680 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63680 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63680 ']' 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.703 16:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.703 [2024-09-28 16:09:57.258818] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:42.703 [2024-09-28 16:09:57.258959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63680 ] 00:08:42.963 [2024-09-28 16:09:57.428606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.223 [2024-09-28 16:09:57.682092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.223 [2024-09-28 16:09:57.905530] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.223 [2024-09-28 16:09:57.905573] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.482 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.482 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:43.482 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.482 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:43.482 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.482 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.482 BaseBdev1_malloc 00:08:43.482 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.483 true 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.483 [2024-09-28 16:09:58.153944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:43.483 [2024-09-28 16:09:58.154001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.483 [2024-09-28 16:09:58.154019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:43.483 [2024-09-28 16:09:58.154030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.483 [2024-09-28 16:09:58.156436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.483 [2024-09-28 16:09:58.156475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:43.483 BaseBdev1 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.483 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.742 BaseBdev2_malloc 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:43.742 16:09:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.742 true 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.742 [2024-09-28 16:09:58.236189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:43.742 [2024-09-28 16:09:58.236251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.742 [2024-09-28 16:09:58.236270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:43.742 [2024-09-28 16:09:58.236283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.742 [2024-09-28 16:09:58.238592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.742 [2024-09-28 16:09:58.238669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:43.742 BaseBdev2 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.742 [2024-09-28 16:09:58.248234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:43.742 [2024-09-28 16:09:58.250341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.742 [2024-09-28 16:09:58.250519] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.742 [2024-09-28 16:09:58.250534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.742 [2024-09-28 16:09:58.250789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.742 [2024-09-28 16:09:58.250981] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.742 [2024-09-28 16:09:58.250997] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:43.742 [2024-09-28 16:09:58.251147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.742 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.742 "name": "raid_bdev1", 00:08:43.742 "uuid": "ac737728-a6d3-4f87-a70a-047ebc24e9fa", 00:08:43.742 "strip_size_kb": 0, 00:08:43.742 "state": "online", 00:08:43.742 "raid_level": "raid1", 00:08:43.742 "superblock": true, 00:08:43.742 "num_base_bdevs": 2, 00:08:43.742 "num_base_bdevs_discovered": 2, 00:08:43.742 "num_base_bdevs_operational": 2, 00:08:43.742 "base_bdevs_list": [ 00:08:43.742 { 00:08:43.742 "name": "BaseBdev1", 00:08:43.742 "uuid": "d77caeb3-a9c6-542e-89a6-00b9a86ac2ec", 00:08:43.742 "is_configured": true, 00:08:43.742 "data_offset": 2048, 00:08:43.742 "data_size": 63488 00:08:43.742 }, 00:08:43.742 { 00:08:43.743 "name": "BaseBdev2", 00:08:43.743 "uuid": "c62a251b-1327-5e8c-9d3a-8556e425a9b4", 00:08:43.743 "is_configured": true, 00:08:43.743 "data_offset": 2048, 00:08:43.743 "data_size": 63488 00:08:43.743 } 00:08:43.743 ] 00:08:43.743 }' 00:08:43.743 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.743 16:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.344 16:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:44.345 16:09:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:44.345 [2024-09-28 16:09:58.800807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.287 [2024-09-28 16:09:59.719157] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:45.287 [2024-09-28 16:09:59.719335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.287 [2024-09-28 16:09:59.719602] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.287 "name": "raid_bdev1", 00:08:45.287 "uuid": "ac737728-a6d3-4f87-a70a-047ebc24e9fa", 00:08:45.287 "strip_size_kb": 0, 00:08:45.287 "state": "online", 00:08:45.287 "raid_level": "raid1", 00:08:45.287 "superblock": true, 00:08:45.287 "num_base_bdevs": 2, 00:08:45.287 "num_base_bdevs_discovered": 1, 00:08:45.287 "num_base_bdevs_operational": 1, 00:08:45.287 "base_bdevs_list": [ 00:08:45.287 { 00:08:45.287 "name": null, 00:08:45.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.287 "is_configured": false, 00:08:45.287 "data_offset": 0, 00:08:45.287 "data_size": 63488 00:08:45.287 }, 00:08:45.287 { 00:08:45.287 "name": 
"BaseBdev2", 00:08:45.287 "uuid": "c62a251b-1327-5e8c-9d3a-8556e425a9b4", 00:08:45.287 "is_configured": true, 00:08:45.287 "data_offset": 2048, 00:08:45.287 "data_size": 63488 00:08:45.287 } 00:08:45.287 ] 00:08:45.287 }' 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.287 16:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.546 [2024-09-28 16:10:00.220165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.546 [2024-09-28 16:10:00.220210] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.546 [2024-09-28 16:10:00.222571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.546 [2024-09-28 16:10:00.222611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.546 [2024-09-28 16:10:00.222665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.546 [2024-09-28 16:10:00.222674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:45.546 { 00:08:45.546 "results": [ 00:08:45.546 { 00:08:45.546 "job": "raid_bdev1", 00:08:45.546 "core_mask": "0x1", 00:08:45.546 "workload": "randrw", 00:08:45.546 "percentage": 50, 00:08:45.546 "status": "finished", 00:08:45.546 "queue_depth": 1, 00:08:45.546 "io_size": 131072, 00:08:45.546 "runtime": 1.419839, 00:08:45.546 "iops": 19347.26402077982, 00:08:45.546 "mibps": 2418.4080025974777, 00:08:45.546 "io_failed": 0, 00:08:45.546 "io_timeout": 0, 
00:08:45.546 "avg_latency_us": 49.266075035409806, 00:08:45.546 "min_latency_us": 21.575545851528386, 00:08:45.546 "max_latency_us": 1266.3615720524017 00:08:45.546 } 00:08:45.546 ], 00:08:45.546 "core_count": 1 00:08:45.546 } 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63680 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63680 ']' 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63680 00:08:45.546 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:45.806 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.806 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63680 00:08:45.806 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.806 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.806 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63680' 00:08:45.806 killing process with pid 63680 00:08:45.806 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63680 00:08:45.806 [2024-09-28 16:10:00.272213] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.806 16:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63680 00:08:45.806 [2024-09-28 16:10:00.417819] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jqn3jEFn8w 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:47.187 ************************************ 00:08:47.187 END TEST raid_write_error_test 00:08:47.187 ************************************ 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:47.187 00:08:47.187 real 0m4.671s 00:08:47.187 user 0m5.431s 00:08:47.187 sys 0m0.690s 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.187 16:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.447 16:10:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:47.447 16:10:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:47.447 16:10:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:47.447 16:10:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.447 16:10:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.447 16:10:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.447 ************************************ 00:08:47.447 START TEST raid_state_function_test 00:08:47.447 ************************************ 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.447 
16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:47.447 Process raid pid: 63822 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63822 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63822' 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63822 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63822 ']' 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.447 16:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.447 [2024-09-28 16:10:01.992755] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:47.447 [2024-09-28 16:10:01.992954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.707 [2024-09-28 16:10:02.169702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.967 [2024-09-28 16:10:02.413151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.967 [2024-09-28 16:10:02.650237] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.967 [2024-09-28 16:10:02.650335] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.227 [2024-09-28 16:10:02.812483] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.227 [2024-09-28 16:10:02.812544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.227 [2024-09-28 16:10:02.812554] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.227 [2024-09-28 16:10:02.812564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.227 [2024-09-28 16:10:02.812570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.227 [2024-09-28 16:10:02.812581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.227 "name": "Existed_Raid", 00:08:48.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.227 "strip_size_kb": 64, 00:08:48.227 "state": "configuring", 00:08:48.227 "raid_level": "raid0", 00:08:48.227 "superblock": false, 00:08:48.227 "num_base_bdevs": 3, 00:08:48.227 "num_base_bdevs_discovered": 0, 00:08:48.227 "num_base_bdevs_operational": 3, 00:08:48.227 "base_bdevs_list": [ 00:08:48.227 { 00:08:48.227 "name": "BaseBdev1", 00:08:48.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.227 "is_configured": false, 00:08:48.227 "data_offset": 0, 00:08:48.227 "data_size": 0 00:08:48.227 }, 00:08:48.227 { 00:08:48.227 "name": "BaseBdev2", 00:08:48.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.227 "is_configured": false, 00:08:48.227 "data_offset": 0, 00:08:48.227 "data_size": 0 00:08:48.227 }, 00:08:48.227 { 00:08:48.227 "name": "BaseBdev3", 00:08:48.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.227 "is_configured": false, 00:08:48.227 "data_offset": 0, 00:08:48.227 "data_size": 0 00:08:48.227 } 00:08:48.227 ] 00:08:48.227 }' 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.227 16:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.797 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.797 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.797 16:10:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.797 [2024-09-28 16:10:03.243659] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.797 [2024-09-28 16:10:03.243743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:48.797 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.797 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.797 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.797 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.797 [2024-09-28 16:10:03.255664] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.797 [2024-09-28 16:10:03.255757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.797 [2024-09-28 16:10:03.255782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.797 [2024-09-28 16:10:03.255804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.797 [2024-09-28 16:10:03.255821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.797 [2024-09-28 16:10:03.255841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.798 [2024-09-28 16:10:03.319493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.798 BaseBdev1 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.798 [ 00:08:48.798 { 00:08:48.798 "name": "BaseBdev1", 00:08:48.798 "aliases": [ 00:08:48.798 "c80051d9-30ce-4708-8181-585dce3cc92e" 00:08:48.798 ], 00:08:48.798 
"product_name": "Malloc disk", 00:08:48.798 "block_size": 512, 00:08:48.798 "num_blocks": 65536, 00:08:48.798 "uuid": "c80051d9-30ce-4708-8181-585dce3cc92e", 00:08:48.798 "assigned_rate_limits": { 00:08:48.798 "rw_ios_per_sec": 0, 00:08:48.798 "rw_mbytes_per_sec": 0, 00:08:48.798 "r_mbytes_per_sec": 0, 00:08:48.798 "w_mbytes_per_sec": 0 00:08:48.798 }, 00:08:48.798 "claimed": true, 00:08:48.798 "claim_type": "exclusive_write", 00:08:48.798 "zoned": false, 00:08:48.798 "supported_io_types": { 00:08:48.798 "read": true, 00:08:48.798 "write": true, 00:08:48.798 "unmap": true, 00:08:48.798 "flush": true, 00:08:48.798 "reset": true, 00:08:48.798 "nvme_admin": false, 00:08:48.798 "nvme_io": false, 00:08:48.798 "nvme_io_md": false, 00:08:48.798 "write_zeroes": true, 00:08:48.798 "zcopy": true, 00:08:48.798 "get_zone_info": false, 00:08:48.798 "zone_management": false, 00:08:48.798 "zone_append": false, 00:08:48.798 "compare": false, 00:08:48.798 "compare_and_write": false, 00:08:48.798 "abort": true, 00:08:48.798 "seek_hole": false, 00:08:48.798 "seek_data": false, 00:08:48.798 "copy": true, 00:08:48.798 "nvme_iov_md": false 00:08:48.798 }, 00:08:48.798 "memory_domains": [ 00:08:48.798 { 00:08:48.798 "dma_device_id": "system", 00:08:48.798 "dma_device_type": 1 00:08:48.798 }, 00:08:48.798 { 00:08:48.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.798 "dma_device_type": 2 00:08:48.798 } 00:08:48.798 ], 00:08:48.798 "driver_specific": {} 00:08:48.798 } 00:08:48.798 ] 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.798 16:10:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.798 "name": "Existed_Raid", 00:08:48.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.798 "strip_size_kb": 64, 00:08:48.798 "state": "configuring", 00:08:48.798 "raid_level": "raid0", 00:08:48.798 "superblock": false, 00:08:48.798 "num_base_bdevs": 3, 00:08:48.798 "num_base_bdevs_discovered": 1, 00:08:48.798 "num_base_bdevs_operational": 3, 00:08:48.798 "base_bdevs_list": [ 00:08:48.798 { 00:08:48.798 "name": "BaseBdev1", 
00:08:48.798 "uuid": "c80051d9-30ce-4708-8181-585dce3cc92e", 00:08:48.798 "is_configured": true, 00:08:48.798 "data_offset": 0, 00:08:48.798 "data_size": 65536 00:08:48.798 }, 00:08:48.798 { 00:08:48.798 "name": "BaseBdev2", 00:08:48.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.798 "is_configured": false, 00:08:48.798 "data_offset": 0, 00:08:48.798 "data_size": 0 00:08:48.798 }, 00:08:48.798 { 00:08:48.798 "name": "BaseBdev3", 00:08:48.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.798 "is_configured": false, 00:08:48.798 "data_offset": 0, 00:08:48.798 "data_size": 0 00:08:48.798 } 00:08:48.798 ] 00:08:48.798 }' 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.798 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.367 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.368 [2024-09-28 16:10:03.814658] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.368 [2024-09-28 16:10:03.814698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.368 [2024-09-28 
16:10:03.826689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.368 [2024-09-28 16:10:03.828829] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.368 [2024-09-28 16:10:03.828871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.368 [2024-09-28 16:10:03.828881] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.368 [2024-09-28 16:10:03.828890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.368 "name": "Existed_Raid", 00:08:49.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.368 "strip_size_kb": 64, 00:08:49.368 "state": "configuring", 00:08:49.368 "raid_level": "raid0", 00:08:49.368 "superblock": false, 00:08:49.368 "num_base_bdevs": 3, 00:08:49.368 "num_base_bdevs_discovered": 1, 00:08:49.368 "num_base_bdevs_operational": 3, 00:08:49.368 "base_bdevs_list": [ 00:08:49.368 { 00:08:49.368 "name": "BaseBdev1", 00:08:49.368 "uuid": "c80051d9-30ce-4708-8181-585dce3cc92e", 00:08:49.368 "is_configured": true, 00:08:49.368 "data_offset": 0, 00:08:49.368 "data_size": 65536 00:08:49.368 }, 00:08:49.368 { 00:08:49.368 "name": "BaseBdev2", 00:08:49.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.368 "is_configured": false, 00:08:49.368 "data_offset": 0, 00:08:49.368 "data_size": 0 00:08:49.368 }, 00:08:49.368 { 00:08:49.368 "name": "BaseBdev3", 00:08:49.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.368 "is_configured": false, 00:08:49.368 "data_offset": 0, 00:08:49.368 "data_size": 0 00:08:49.368 } 00:08:49.368 ] 00:08:49.368 }' 00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:49.368 16:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.628 [2024-09-28 16:10:04.265052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.628 BaseBdev2 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.628 16:10:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.628 [ 00:08:49.628 { 00:08:49.628 "name": "BaseBdev2", 00:08:49.628 "aliases": [ 00:08:49.628 "8e9eab5a-ae6e-4fd2-9d6e-ccf3bfdf2381" 00:08:49.628 ], 00:08:49.628 "product_name": "Malloc disk", 00:08:49.628 "block_size": 512, 00:08:49.628 "num_blocks": 65536, 00:08:49.628 "uuid": "8e9eab5a-ae6e-4fd2-9d6e-ccf3bfdf2381", 00:08:49.628 "assigned_rate_limits": { 00:08:49.628 "rw_ios_per_sec": 0, 00:08:49.628 "rw_mbytes_per_sec": 0, 00:08:49.628 "r_mbytes_per_sec": 0, 00:08:49.628 "w_mbytes_per_sec": 0 00:08:49.628 }, 00:08:49.628 "claimed": true, 00:08:49.628 "claim_type": "exclusive_write", 00:08:49.628 "zoned": false, 00:08:49.628 "supported_io_types": { 00:08:49.628 "read": true, 00:08:49.628 "write": true, 00:08:49.628 "unmap": true, 00:08:49.628 "flush": true, 00:08:49.628 "reset": true, 00:08:49.628 "nvme_admin": false, 00:08:49.628 "nvme_io": false, 00:08:49.628 "nvme_io_md": false, 00:08:49.628 "write_zeroes": true, 00:08:49.628 "zcopy": true, 00:08:49.628 "get_zone_info": false, 00:08:49.628 "zone_management": false, 00:08:49.628 "zone_append": false, 00:08:49.628 "compare": false, 00:08:49.628 "compare_and_write": false, 00:08:49.628 "abort": true, 00:08:49.628 "seek_hole": false, 00:08:49.628 "seek_data": false, 00:08:49.628 "copy": true, 00:08:49.628 "nvme_iov_md": false 00:08:49.628 }, 00:08:49.628 "memory_domains": [ 00:08:49.628 { 00:08:49.628 "dma_device_id": "system", 00:08:49.628 "dma_device_type": 1 00:08:49.628 }, 00:08:49.628 { 00:08:49.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.628 "dma_device_type": 2 00:08:49.628 } 00:08:49.628 ], 00:08:49.628 "driver_specific": {} 00:08:49.628 } 00:08:49.628 ] 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.628 16:10:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.628 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.888 16:10:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.888 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.888 "name": "Existed_Raid", 00:08:49.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.888 "strip_size_kb": 64, 00:08:49.888 "state": "configuring", 00:08:49.888 "raid_level": "raid0", 00:08:49.888 "superblock": false, 00:08:49.888 "num_base_bdevs": 3, 00:08:49.888 "num_base_bdevs_discovered": 2, 00:08:49.888 "num_base_bdevs_operational": 3, 00:08:49.888 "base_bdevs_list": [ 00:08:49.888 { 00:08:49.888 "name": "BaseBdev1", 00:08:49.888 "uuid": "c80051d9-30ce-4708-8181-585dce3cc92e", 00:08:49.888 "is_configured": true, 00:08:49.888 "data_offset": 0, 00:08:49.888 "data_size": 65536 00:08:49.888 }, 00:08:49.888 { 00:08:49.888 "name": "BaseBdev2", 00:08:49.888 "uuid": "8e9eab5a-ae6e-4fd2-9d6e-ccf3bfdf2381", 00:08:49.888 "is_configured": true, 00:08:49.888 "data_offset": 0, 00:08:49.888 "data_size": 65536 00:08:49.888 }, 00:08:49.888 { 00:08:49.888 "name": "BaseBdev3", 00:08:49.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.888 "is_configured": false, 00:08:49.888 "data_offset": 0, 00:08:49.888 "data_size": 0 00:08:49.888 } 00:08:49.888 ] 00:08:49.888 }' 00:08:49.888 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.888 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.148 [2024-09-28 16:10:04.752252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.148 [2024-09-28 16:10:04.752365] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.148 [2024-09-28 16:10:04.752399] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:50.148 [2024-09-28 16:10:04.752747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:50.148 [2024-09-28 16:10:04.752974] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.148 [2024-09-28 16:10:04.753019] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:50.148 [2024-09-28 16:10:04.753355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.148 BaseBdev3 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.148 
16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.148 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.148 [ 00:08:50.148 { 00:08:50.148 "name": "BaseBdev3", 00:08:50.148 "aliases": [ 00:08:50.148 "13e0be1d-520f-416c-a343-5b65161c7638" 00:08:50.148 ], 00:08:50.148 "product_name": "Malloc disk", 00:08:50.148 "block_size": 512, 00:08:50.148 "num_blocks": 65536, 00:08:50.148 "uuid": "13e0be1d-520f-416c-a343-5b65161c7638", 00:08:50.148 "assigned_rate_limits": { 00:08:50.148 "rw_ios_per_sec": 0, 00:08:50.148 "rw_mbytes_per_sec": 0, 00:08:50.148 "r_mbytes_per_sec": 0, 00:08:50.148 "w_mbytes_per_sec": 0 00:08:50.148 }, 00:08:50.148 "claimed": true, 00:08:50.148 "claim_type": "exclusive_write", 00:08:50.148 "zoned": false, 00:08:50.148 "supported_io_types": { 00:08:50.148 "read": true, 00:08:50.148 "write": true, 00:08:50.148 "unmap": true, 00:08:50.148 "flush": true, 00:08:50.148 "reset": true, 00:08:50.148 "nvme_admin": false, 00:08:50.148 "nvme_io": false, 00:08:50.148 "nvme_io_md": false, 00:08:50.148 "write_zeroes": true, 00:08:50.148 "zcopy": true, 00:08:50.148 "get_zone_info": false, 00:08:50.148 "zone_management": false, 00:08:50.148 "zone_append": false, 00:08:50.148 "compare": false, 00:08:50.149 "compare_and_write": false, 00:08:50.149 "abort": true, 00:08:50.149 "seek_hole": false, 00:08:50.149 "seek_data": false, 00:08:50.149 "copy": true, 00:08:50.149 "nvme_iov_md": false 00:08:50.149 }, 00:08:50.149 "memory_domains": [ 00:08:50.149 { 00:08:50.149 "dma_device_id": "system", 00:08:50.149 "dma_device_type": 1 00:08:50.149 }, 00:08:50.149 { 00:08:50.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.149 "dma_device_type": 2 00:08:50.149 } 00:08:50.149 ], 00:08:50.149 "driver_specific": {} 00:08:50.149 } 00:08:50.149 ] 
00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.149 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.409 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.409 "name": "Existed_Raid", 00:08:50.409 "uuid": "54812ee7-0b77-439e-ac75-1dcfe7d45366", 00:08:50.409 "strip_size_kb": 64, 00:08:50.409 "state": "online", 00:08:50.409 "raid_level": "raid0", 00:08:50.409 "superblock": false, 00:08:50.409 "num_base_bdevs": 3, 00:08:50.409 "num_base_bdevs_discovered": 3, 00:08:50.409 "num_base_bdevs_operational": 3, 00:08:50.409 "base_bdevs_list": [ 00:08:50.409 { 00:08:50.409 "name": "BaseBdev1", 00:08:50.409 "uuid": "c80051d9-30ce-4708-8181-585dce3cc92e", 00:08:50.409 "is_configured": true, 00:08:50.409 "data_offset": 0, 00:08:50.409 "data_size": 65536 00:08:50.409 }, 00:08:50.409 { 00:08:50.409 "name": "BaseBdev2", 00:08:50.409 "uuid": "8e9eab5a-ae6e-4fd2-9d6e-ccf3bfdf2381", 00:08:50.409 "is_configured": true, 00:08:50.409 "data_offset": 0, 00:08:50.409 "data_size": 65536 00:08:50.409 }, 00:08:50.409 { 00:08:50.409 "name": "BaseBdev3", 00:08:50.409 "uuid": "13e0be1d-520f-416c-a343-5b65161c7638", 00:08:50.409 "is_configured": true, 00:08:50.409 "data_offset": 0, 00:08:50.409 "data_size": 65536 00:08:50.409 } 00:08:50.409 ] 00:08:50.409 }' 00:08:50.409 16:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.409 16:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.668 [2024-09-28 16:10:05.243711] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.668 "name": "Existed_Raid", 00:08:50.668 "aliases": [ 00:08:50.668 "54812ee7-0b77-439e-ac75-1dcfe7d45366" 00:08:50.668 ], 00:08:50.668 "product_name": "Raid Volume", 00:08:50.668 "block_size": 512, 00:08:50.668 "num_blocks": 196608, 00:08:50.668 "uuid": "54812ee7-0b77-439e-ac75-1dcfe7d45366", 00:08:50.668 "assigned_rate_limits": { 00:08:50.668 "rw_ios_per_sec": 0, 00:08:50.668 "rw_mbytes_per_sec": 0, 00:08:50.668 "r_mbytes_per_sec": 0, 00:08:50.668 "w_mbytes_per_sec": 0 00:08:50.668 }, 00:08:50.668 "claimed": false, 00:08:50.668 "zoned": false, 00:08:50.668 "supported_io_types": { 00:08:50.668 "read": true, 00:08:50.668 "write": true, 00:08:50.668 "unmap": true, 00:08:50.668 "flush": true, 00:08:50.668 "reset": true, 00:08:50.668 "nvme_admin": false, 00:08:50.668 "nvme_io": false, 00:08:50.668 "nvme_io_md": false, 00:08:50.668 "write_zeroes": true, 00:08:50.668 "zcopy": false, 00:08:50.668 "get_zone_info": false, 00:08:50.668 "zone_management": false, 00:08:50.668 
"zone_append": false, 00:08:50.668 "compare": false, 00:08:50.668 "compare_and_write": false, 00:08:50.668 "abort": false, 00:08:50.668 "seek_hole": false, 00:08:50.668 "seek_data": false, 00:08:50.668 "copy": false, 00:08:50.668 "nvme_iov_md": false 00:08:50.668 }, 00:08:50.668 "memory_domains": [ 00:08:50.668 { 00:08:50.668 "dma_device_id": "system", 00:08:50.668 "dma_device_type": 1 00:08:50.668 }, 00:08:50.668 { 00:08:50.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.668 "dma_device_type": 2 00:08:50.668 }, 00:08:50.668 { 00:08:50.668 "dma_device_id": "system", 00:08:50.668 "dma_device_type": 1 00:08:50.668 }, 00:08:50.668 { 00:08:50.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.668 "dma_device_type": 2 00:08:50.668 }, 00:08:50.668 { 00:08:50.668 "dma_device_id": "system", 00:08:50.668 "dma_device_type": 1 00:08:50.668 }, 00:08:50.668 { 00:08:50.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.668 "dma_device_type": 2 00:08:50.668 } 00:08:50.668 ], 00:08:50.668 "driver_specific": { 00:08:50.668 "raid": { 00:08:50.668 "uuid": "54812ee7-0b77-439e-ac75-1dcfe7d45366", 00:08:50.668 "strip_size_kb": 64, 00:08:50.668 "state": "online", 00:08:50.668 "raid_level": "raid0", 00:08:50.668 "superblock": false, 00:08:50.668 "num_base_bdevs": 3, 00:08:50.668 "num_base_bdevs_discovered": 3, 00:08:50.668 "num_base_bdevs_operational": 3, 00:08:50.668 "base_bdevs_list": [ 00:08:50.668 { 00:08:50.668 "name": "BaseBdev1", 00:08:50.668 "uuid": "c80051d9-30ce-4708-8181-585dce3cc92e", 00:08:50.668 "is_configured": true, 00:08:50.668 "data_offset": 0, 00:08:50.668 "data_size": 65536 00:08:50.668 }, 00:08:50.668 { 00:08:50.668 "name": "BaseBdev2", 00:08:50.668 "uuid": "8e9eab5a-ae6e-4fd2-9d6e-ccf3bfdf2381", 00:08:50.668 "is_configured": true, 00:08:50.668 "data_offset": 0, 00:08:50.668 "data_size": 65536 00:08:50.668 }, 00:08:50.668 { 00:08:50.668 "name": "BaseBdev3", 00:08:50.668 "uuid": "13e0be1d-520f-416c-a343-5b65161c7638", 00:08:50.668 "is_configured": true, 
00:08:50.668 "data_offset": 0, 00:08:50.668 "data_size": 65536 00:08:50.668 } 00:08:50.668 ] 00:08:50.668 } 00:08:50.668 } 00:08:50.668 }' 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.668 BaseBdev2 00:08:50.668 BaseBdev3' 00:08:50.668 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.928 [2024-09-28 16:10:05.510964] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.928 [2024-09-28 16:10:05.511032] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.928 [2024-09-28 16:10:05.511095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:50.928 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.188 "name": "Existed_Raid", 00:08:51.188 "uuid": "54812ee7-0b77-439e-ac75-1dcfe7d45366", 00:08:51.188 "strip_size_kb": 64, 00:08:51.188 "state": "offline", 00:08:51.188 "raid_level": "raid0", 00:08:51.188 "superblock": false, 00:08:51.188 "num_base_bdevs": 3, 00:08:51.188 "num_base_bdevs_discovered": 2, 00:08:51.188 "num_base_bdevs_operational": 2, 00:08:51.188 "base_bdevs_list": [ 00:08:51.188 { 00:08:51.188 "name": null, 00:08:51.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.188 "is_configured": false, 00:08:51.188 "data_offset": 0, 00:08:51.188 "data_size": 65536 00:08:51.188 }, 00:08:51.188 { 00:08:51.188 "name": "BaseBdev2", 00:08:51.188 "uuid": "8e9eab5a-ae6e-4fd2-9d6e-ccf3bfdf2381", 00:08:51.188 "is_configured": true, 00:08:51.188 "data_offset": 0, 00:08:51.188 "data_size": 65536 00:08:51.188 }, 00:08:51.188 { 00:08:51.188 "name": "BaseBdev3", 00:08:51.188 "uuid": "13e0be1d-520f-416c-a343-5b65161c7638", 00:08:51.188 "is_configured": true, 00:08:51.188 "data_offset": 0, 00:08:51.188 "data_size": 65536 00:08:51.188 } 00:08:51.188 ] 00:08:51.188 }' 00:08:51.188 16:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.188 16:10:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.448 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.448 [2024-09-28 16:10:06.116688] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.708 16:10:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.708 [2024-09-28 16:10:06.273378] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.708 [2024-09-28 16:10:06.273439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:51.708 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.968 BaseBdev2 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.968 [ 00:08:51.968 { 00:08:51.968 "name": "BaseBdev2", 00:08:51.968 "aliases": [ 00:08:51.968 "7be9bcba-6027-415e-9669-d237a7e92c3c" 00:08:51.968 ], 00:08:51.968 "product_name": "Malloc disk", 00:08:51.968 "block_size": 512, 00:08:51.968 "num_blocks": 65536, 00:08:51.968 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:51.968 "assigned_rate_limits": { 00:08:51.968 "rw_ios_per_sec": 0, 00:08:51.968 "rw_mbytes_per_sec": 0, 00:08:51.968 "r_mbytes_per_sec": 0, 00:08:51.968 "w_mbytes_per_sec": 0 00:08:51.968 }, 00:08:51.968 "claimed": false, 00:08:51.968 "zoned": false, 00:08:51.968 "supported_io_types": { 00:08:51.968 "read": true, 00:08:51.968 "write": true, 00:08:51.968 "unmap": true, 00:08:51.968 "flush": true, 00:08:51.968 "reset": true, 00:08:51.968 "nvme_admin": false, 00:08:51.968 "nvme_io": false, 00:08:51.968 "nvme_io_md": false, 00:08:51.968 "write_zeroes": true, 00:08:51.968 "zcopy": true, 00:08:51.968 "get_zone_info": false, 00:08:51.968 "zone_management": false, 00:08:51.968 "zone_append": false, 00:08:51.968 "compare": false, 00:08:51.968 "compare_and_write": false, 00:08:51.968 "abort": true, 00:08:51.968 "seek_hole": false, 00:08:51.968 "seek_data": false, 00:08:51.968 "copy": true, 00:08:51.968 "nvme_iov_md": false 00:08:51.968 }, 00:08:51.968 "memory_domains": [ 00:08:51.968 { 00:08:51.968 "dma_device_id": "system", 00:08:51.968 "dma_device_type": 1 00:08:51.968 }, 
00:08:51.968 { 00:08:51.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.968 "dma_device_type": 2 00:08:51.968 } 00:08:51.968 ], 00:08:51.968 "driver_specific": {} 00:08:51.968 } 00:08:51.968 ] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.968 BaseBdev3 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:51.968 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.969 [ 00:08:51.969 { 00:08:51.969 "name": "BaseBdev3", 00:08:51.969 "aliases": [ 00:08:51.969 "1a404030-d410-4ed5-8e8a-e21cb721bd4a" 00:08:51.969 ], 00:08:51.969 "product_name": "Malloc disk", 00:08:51.969 "block_size": 512, 00:08:51.969 "num_blocks": 65536, 00:08:51.969 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:51.969 "assigned_rate_limits": { 00:08:51.969 "rw_ios_per_sec": 0, 00:08:51.969 "rw_mbytes_per_sec": 0, 00:08:51.969 "r_mbytes_per_sec": 0, 00:08:51.969 "w_mbytes_per_sec": 0 00:08:51.969 }, 00:08:51.969 "claimed": false, 00:08:51.969 "zoned": false, 00:08:51.969 "supported_io_types": { 00:08:51.969 "read": true, 00:08:51.969 "write": true, 00:08:51.969 "unmap": true, 00:08:51.969 "flush": true, 00:08:51.969 "reset": true, 00:08:51.969 "nvme_admin": false, 00:08:51.969 "nvme_io": false, 00:08:51.969 "nvme_io_md": false, 00:08:51.969 "write_zeroes": true, 00:08:51.969 "zcopy": true, 00:08:51.969 "get_zone_info": false, 00:08:51.969 "zone_management": false, 00:08:51.969 "zone_append": false, 00:08:51.969 "compare": false, 00:08:51.969 "compare_and_write": false, 00:08:51.969 "abort": true, 00:08:51.969 "seek_hole": false, 00:08:51.969 "seek_data": false, 00:08:51.969 "copy": true, 00:08:51.969 "nvme_iov_md": false 00:08:51.969 }, 00:08:51.969 "memory_domains": [ 00:08:51.969 { 00:08:51.969 "dma_device_id": "system", 00:08:51.969 "dma_device_type": 1 00:08:51.969 }, 00:08:51.969 { 
00:08:51.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.969 "dma_device_type": 2 00:08:51.969 } 00:08:51.969 ], 00:08:51.969 "driver_specific": {} 00:08:51.969 } 00:08:51.969 ] 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.969 [2024-09-28 16:10:06.593690] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.969 [2024-09-28 16:10:06.593783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.969 [2024-09-28 16:10:06.593825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.969 [2024-09-28 16:10:06.595935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.969 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.228 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.228 "name": "Existed_Raid", 00:08:52.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.228 "strip_size_kb": 64, 00:08:52.228 "state": "configuring", 00:08:52.228 "raid_level": "raid0", 00:08:52.228 "superblock": false, 00:08:52.228 "num_base_bdevs": 3, 00:08:52.228 "num_base_bdevs_discovered": 2, 00:08:52.228 "num_base_bdevs_operational": 3, 00:08:52.228 "base_bdevs_list": [ 00:08:52.228 { 00:08:52.228 "name": "BaseBdev1", 00:08:52.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.228 
"is_configured": false, 00:08:52.228 "data_offset": 0, 00:08:52.228 "data_size": 0 00:08:52.228 }, 00:08:52.228 { 00:08:52.228 "name": "BaseBdev2", 00:08:52.228 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:52.228 "is_configured": true, 00:08:52.228 "data_offset": 0, 00:08:52.228 "data_size": 65536 00:08:52.228 }, 00:08:52.228 { 00:08:52.228 "name": "BaseBdev3", 00:08:52.228 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:52.228 "is_configured": true, 00:08:52.228 "data_offset": 0, 00:08:52.228 "data_size": 65536 00:08:52.228 } 00:08:52.228 ] 00:08:52.228 }' 00:08:52.228 16:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.228 16:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.488 [2024-09-28 16:10:07.072810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.488 16:10:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.488 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.488 "name": "Existed_Raid", 00:08:52.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.488 "strip_size_kb": 64, 00:08:52.488 "state": "configuring", 00:08:52.489 "raid_level": "raid0", 00:08:52.489 "superblock": false, 00:08:52.489 "num_base_bdevs": 3, 00:08:52.489 "num_base_bdevs_discovered": 1, 00:08:52.489 "num_base_bdevs_operational": 3, 00:08:52.489 "base_bdevs_list": [ 00:08:52.489 { 00:08:52.489 "name": "BaseBdev1", 00:08:52.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.489 "is_configured": false, 00:08:52.489 "data_offset": 0, 00:08:52.489 "data_size": 0 00:08:52.489 }, 00:08:52.489 { 00:08:52.489 "name": null, 00:08:52.489 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:52.489 "is_configured": false, 00:08:52.489 "data_offset": 0, 
00:08:52.489 "data_size": 65536 00:08:52.489 }, 00:08:52.489 { 00:08:52.489 "name": "BaseBdev3", 00:08:52.489 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:52.489 "is_configured": true, 00:08:52.489 "data_offset": 0, 00:08:52.489 "data_size": 65536 00:08:52.489 } 00:08:52.489 ] 00:08:52.489 }' 00:08:52.489 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.489 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.058 [2024-09-28 16:10:07.612327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.058 BaseBdev1 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.058 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.058 [ 00:08:53.058 { 00:08:53.058 "name": "BaseBdev1", 00:08:53.058 "aliases": [ 00:08:53.058 "fff37497-76bb-4b02-a042-5dcb82bed1c9" 00:08:53.058 ], 00:08:53.058 "product_name": "Malloc disk", 00:08:53.058 "block_size": 512, 00:08:53.058 "num_blocks": 65536, 00:08:53.058 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:53.058 "assigned_rate_limits": { 00:08:53.058 "rw_ios_per_sec": 0, 00:08:53.058 "rw_mbytes_per_sec": 0, 00:08:53.058 "r_mbytes_per_sec": 0, 00:08:53.058 "w_mbytes_per_sec": 0 00:08:53.058 }, 00:08:53.058 "claimed": true, 00:08:53.058 "claim_type": "exclusive_write", 00:08:53.058 "zoned": false, 00:08:53.058 "supported_io_types": { 00:08:53.058 "read": true, 00:08:53.058 "write": true, 00:08:53.058 "unmap": 
true, 00:08:53.059 "flush": true, 00:08:53.059 "reset": true, 00:08:53.059 "nvme_admin": false, 00:08:53.059 "nvme_io": false, 00:08:53.059 "nvme_io_md": false, 00:08:53.059 "write_zeroes": true, 00:08:53.059 "zcopy": true, 00:08:53.059 "get_zone_info": false, 00:08:53.059 "zone_management": false, 00:08:53.059 "zone_append": false, 00:08:53.059 "compare": false, 00:08:53.059 "compare_and_write": false, 00:08:53.059 "abort": true, 00:08:53.059 "seek_hole": false, 00:08:53.059 "seek_data": false, 00:08:53.059 "copy": true, 00:08:53.059 "nvme_iov_md": false 00:08:53.059 }, 00:08:53.059 "memory_domains": [ 00:08:53.059 { 00:08:53.059 "dma_device_id": "system", 00:08:53.059 "dma_device_type": 1 00:08:53.059 }, 00:08:53.059 { 00:08:53.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.059 "dma_device_type": 2 00:08:53.059 } 00:08:53.059 ], 00:08:53.059 "driver_specific": {} 00:08:53.059 } 00:08:53.059 ] 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.059 16:10:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.059 "name": "Existed_Raid", 00:08:53.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.059 "strip_size_kb": 64, 00:08:53.059 "state": "configuring", 00:08:53.059 "raid_level": "raid0", 00:08:53.059 "superblock": false, 00:08:53.059 "num_base_bdevs": 3, 00:08:53.059 "num_base_bdevs_discovered": 2, 00:08:53.059 "num_base_bdevs_operational": 3, 00:08:53.059 "base_bdevs_list": [ 00:08:53.059 { 00:08:53.059 "name": "BaseBdev1", 00:08:53.059 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:53.059 "is_configured": true, 00:08:53.059 "data_offset": 0, 00:08:53.059 "data_size": 65536 00:08:53.059 }, 00:08:53.059 { 00:08:53.059 "name": null, 00:08:53.059 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:53.059 "is_configured": false, 00:08:53.059 "data_offset": 0, 00:08:53.059 "data_size": 65536 00:08:53.059 }, 00:08:53.059 { 00:08:53.059 "name": "BaseBdev3", 00:08:53.059 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:53.059 "is_configured": true, 00:08:53.059 "data_offset": 0, 
00:08:53.059 "data_size": 65536 00:08:53.059 } 00:08:53.059 ] 00:08:53.059 }' 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.059 16:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.628 [2024-09-28 16:10:08.163458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.628 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.628 "name": "Existed_Raid", 00:08:53.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.628 "strip_size_kb": 64, 00:08:53.628 "state": "configuring", 00:08:53.628 "raid_level": "raid0", 00:08:53.628 "superblock": false, 00:08:53.628 "num_base_bdevs": 3, 00:08:53.628 "num_base_bdevs_discovered": 1, 00:08:53.628 "num_base_bdevs_operational": 3, 00:08:53.628 "base_bdevs_list": [ 00:08:53.628 { 00:08:53.628 "name": "BaseBdev1", 00:08:53.628 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:53.628 "is_configured": true, 00:08:53.628 "data_offset": 0, 00:08:53.628 "data_size": 65536 00:08:53.628 }, 00:08:53.628 { 
00:08:53.628 "name": null, 00:08:53.628 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:53.628 "is_configured": false, 00:08:53.628 "data_offset": 0, 00:08:53.628 "data_size": 65536 00:08:53.629 }, 00:08:53.629 { 00:08:53.629 "name": null, 00:08:53.629 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:53.629 "is_configured": false, 00:08:53.629 "data_offset": 0, 00:08:53.629 "data_size": 65536 00:08:53.629 } 00:08:53.629 ] 00:08:53.629 }' 00:08:53.629 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.629 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.198 [2024-09-28 16:10:08.642658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.198 "name": "Existed_Raid", 00:08:54.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.198 "strip_size_kb": 64, 00:08:54.198 "state": "configuring", 00:08:54.198 "raid_level": "raid0", 00:08:54.198 
"superblock": false, 00:08:54.198 "num_base_bdevs": 3, 00:08:54.198 "num_base_bdevs_discovered": 2, 00:08:54.198 "num_base_bdevs_operational": 3, 00:08:54.198 "base_bdevs_list": [ 00:08:54.198 { 00:08:54.198 "name": "BaseBdev1", 00:08:54.198 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:54.198 "is_configured": true, 00:08:54.198 "data_offset": 0, 00:08:54.198 "data_size": 65536 00:08:54.198 }, 00:08:54.198 { 00:08:54.198 "name": null, 00:08:54.198 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:54.198 "is_configured": false, 00:08:54.198 "data_offset": 0, 00:08:54.198 "data_size": 65536 00:08:54.198 }, 00:08:54.198 { 00:08:54.198 "name": "BaseBdev3", 00:08:54.198 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:54.198 "is_configured": true, 00:08:54.198 "data_offset": 0, 00:08:54.198 "data_size": 65536 00:08:54.198 } 00:08:54.198 ] 00:08:54.198 }' 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.198 16:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.457 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.457 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.457 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.457 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:54.457 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
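The `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` calls repeated through this log boil down to selecting one raid's record out of `bdev_raid_get_bdevs all` output and comparing a handful of fields. A simplified, hypothetical sketch of that pattern (`check_raid_state` and its argument order are made up here, not the SPDK helper itself; `jq` assumed available):

```shell
# Pull the named raid's record out of a `bdev_raid_get_bdevs all`-style JSON
# array and compare state, level, and strip size against expectations.
check_raid_state() {
    _json=$1 _name=$2 _state=$3 _level=$4 _strip=$5
    _rec=$(printf '%s' "$_json" | jq --arg n "$_name" '.[] | select(.name == $n)')
    [ "$(printf '%s' "$_rec" | jq -r '.state')" = "$_state" ] &&
    [ "$(printf '%s' "$_rec" | jq -r '.raid_level')" = "$_level" ] &&
    [ "$(printf '%s' "$_rec" | jq '.strip_size_kb')" -eq "$_strip" ]
}

# Abbreviated Existed_Raid record, as in the dumps above.
sample='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid0","strip_size_kb":64}]'
check_raid_state "$sample" Existed_Raid configuring raid0 64 && echo "state verified"
```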
00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.717 [2024-09-28 16:10:09.157849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.717 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.718 "name": "Existed_Raid", 00:08:54.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.718 "strip_size_kb": 64, 00:08:54.718 "state": "configuring", 00:08:54.718 "raid_level": "raid0", 00:08:54.718 "superblock": false, 00:08:54.718 "num_base_bdevs": 3, 00:08:54.718 "num_base_bdevs_discovered": 1, 00:08:54.718 "num_base_bdevs_operational": 3, 00:08:54.718 "base_bdevs_list": [ 00:08:54.718 { 00:08:54.718 "name": null, 00:08:54.718 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:54.718 "is_configured": false, 00:08:54.718 "data_offset": 0, 00:08:54.718 "data_size": 65536 00:08:54.718 }, 00:08:54.718 { 00:08:54.718 "name": null, 00:08:54.718 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:54.718 "is_configured": false, 00:08:54.718 "data_offset": 0, 00:08:54.718 "data_size": 65536 00:08:54.718 }, 00:08:54.718 { 00:08:54.718 "name": "BaseBdev3", 00:08:54.718 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:54.718 "is_configured": true, 00:08:54.718 "data_offset": 0, 00:08:54.718 "data_size": 65536 00:08:54.718 } 00:08:54.718 ] 00:08:54.718 }' 00:08:54.718 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.718 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.977 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.977 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.977 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.977 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.236 [2024-09-28 16:10:09.712360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.236 "name": "Existed_Raid", 00:08:55.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.236 "strip_size_kb": 64, 00:08:55.236 "state": "configuring", 00:08:55.236 "raid_level": "raid0", 00:08:55.236 "superblock": false, 00:08:55.236 "num_base_bdevs": 3, 00:08:55.236 "num_base_bdevs_discovered": 2, 00:08:55.236 "num_base_bdevs_operational": 3, 00:08:55.236 "base_bdevs_list": [ 00:08:55.236 { 00:08:55.236 "name": null, 00:08:55.236 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:55.236 "is_configured": false, 00:08:55.236 "data_offset": 0, 00:08:55.236 "data_size": 65536 00:08:55.236 }, 00:08:55.236 { 00:08:55.236 "name": "BaseBdev2", 00:08:55.236 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:55.236 "is_configured": true, 00:08:55.236 "data_offset": 0, 00:08:55.236 "data_size": 65536 00:08:55.236 }, 00:08:55.236 { 00:08:55.236 "name": "BaseBdev3", 00:08:55.236 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:55.236 "is_configured": true, 00:08:55.236 "data_offset": 0, 00:08:55.236 "data_size": 65536 00:08:55.236 } 00:08:55.236 ] 00:08:55.236 }' 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.236 16:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.495 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:55.495 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.495 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.495 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.495 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fff37497-76bb-4b02-a042-5dcb82bed1c9 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.754 [2024-09-28 16:10:10.295741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:55.754 [2024-09-28 16:10:10.295788] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:55.754 [2024-09-28 16:10:10.295798] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:55.754 [2024-09-28 16:10:10.296078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:08:55.754 [2024-09-28 16:10:10.296244] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:55.754 [2024-09-28 16:10:10.296254] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:55.754 [2024-09-28 16:10:10.296516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.754 NewBaseBdev 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.754 16:10:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.754 [ 00:08:55.754 { 00:08:55.754 "name": "NewBaseBdev", 00:08:55.754 "aliases": [ 00:08:55.754 "fff37497-76bb-4b02-a042-5dcb82bed1c9" 00:08:55.754 ], 00:08:55.754 "product_name": "Malloc disk", 00:08:55.754 "block_size": 512, 00:08:55.755 "num_blocks": 65536, 00:08:55.755 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:55.755 "assigned_rate_limits": { 00:08:55.755 "rw_ios_per_sec": 0, 00:08:55.755 "rw_mbytes_per_sec": 0, 00:08:55.755 "r_mbytes_per_sec": 0, 00:08:55.755 "w_mbytes_per_sec": 0 00:08:55.755 }, 00:08:55.755 "claimed": true, 00:08:55.755 "claim_type": "exclusive_write", 00:08:55.755 "zoned": false, 00:08:55.755 "supported_io_types": { 00:08:55.755 "read": true, 00:08:55.755 "write": true, 00:08:55.755 "unmap": true, 00:08:55.755 "flush": true, 00:08:55.755 "reset": true, 00:08:55.755 "nvme_admin": false, 00:08:55.755 "nvme_io": false, 00:08:55.755 "nvme_io_md": false, 00:08:55.755 "write_zeroes": true, 00:08:55.755 "zcopy": true, 00:08:55.755 "get_zone_info": false, 00:08:55.755 "zone_management": false, 00:08:55.755 "zone_append": false, 00:08:55.755 "compare": false, 00:08:55.755 "compare_and_write": false, 00:08:55.755 "abort": true, 00:08:55.755 "seek_hole": false, 00:08:55.755 "seek_data": false, 00:08:55.755 "copy": true, 00:08:55.755 "nvme_iov_md": false 00:08:55.755 }, 00:08:55.755 "memory_domains": [ 00:08:55.755 { 00:08:55.755 "dma_device_id": "system", 00:08:55.755 "dma_device_type": 1 00:08:55.755 }, 00:08:55.755 { 00:08:55.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.755 "dma_device_type": 2 00:08:55.755 } 00:08:55.755 ], 00:08:55.755 "driver_specific": {} 00:08:55.755 } 00:08:55.755 ] 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.755 "name": "Existed_Raid", 00:08:55.755 "uuid": "f9f9a535-8a79-4b38-bff6-8dab8d0a7acd", 00:08:55.755 "strip_size_kb": 64, 00:08:55.755 "state": "online", 00:08:55.755 "raid_level": "raid0", 00:08:55.755 "superblock": false, 00:08:55.755 
"num_base_bdevs": 3, 00:08:55.755 "num_base_bdevs_discovered": 3, 00:08:55.755 "num_base_bdevs_operational": 3, 00:08:55.755 "base_bdevs_list": [ 00:08:55.755 { 00:08:55.755 "name": "NewBaseBdev", 00:08:55.755 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:55.755 "is_configured": true, 00:08:55.755 "data_offset": 0, 00:08:55.755 "data_size": 65536 00:08:55.755 }, 00:08:55.755 { 00:08:55.755 "name": "BaseBdev2", 00:08:55.755 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:55.755 "is_configured": true, 00:08:55.755 "data_offset": 0, 00:08:55.755 "data_size": 65536 00:08:55.755 }, 00:08:55.755 { 00:08:55.755 "name": "BaseBdev3", 00:08:55.755 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:55.755 "is_configured": true, 00:08:55.755 "data_offset": 0, 00:08:55.755 "data_size": 65536 00:08:55.755 } 00:08:55.755 ] 00:08:55.755 }' 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.755 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.324 16:10:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.324 [2024-09-28 16:10:10.791225] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.324 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.324 "name": "Existed_Raid", 00:08:56.324 "aliases": [ 00:08:56.324 "f9f9a535-8a79-4b38-bff6-8dab8d0a7acd" 00:08:56.324 ], 00:08:56.325 "product_name": "Raid Volume", 00:08:56.325 "block_size": 512, 00:08:56.325 "num_blocks": 196608, 00:08:56.325 "uuid": "f9f9a535-8a79-4b38-bff6-8dab8d0a7acd", 00:08:56.325 "assigned_rate_limits": { 00:08:56.325 "rw_ios_per_sec": 0, 00:08:56.325 "rw_mbytes_per_sec": 0, 00:08:56.325 "r_mbytes_per_sec": 0, 00:08:56.325 "w_mbytes_per_sec": 0 00:08:56.325 }, 00:08:56.325 "claimed": false, 00:08:56.325 "zoned": false, 00:08:56.325 "supported_io_types": { 00:08:56.325 "read": true, 00:08:56.325 "write": true, 00:08:56.325 "unmap": true, 00:08:56.325 "flush": true, 00:08:56.325 "reset": true, 00:08:56.325 "nvme_admin": false, 00:08:56.325 "nvme_io": false, 00:08:56.325 "nvme_io_md": false, 00:08:56.325 "write_zeroes": true, 00:08:56.325 "zcopy": false, 00:08:56.325 "get_zone_info": false, 00:08:56.325 "zone_management": false, 00:08:56.325 "zone_append": false, 00:08:56.325 "compare": false, 00:08:56.325 "compare_and_write": false, 00:08:56.325 "abort": false, 00:08:56.325 "seek_hole": false, 00:08:56.325 "seek_data": false, 00:08:56.325 "copy": false, 00:08:56.325 "nvme_iov_md": false 00:08:56.325 }, 00:08:56.325 "memory_domains": [ 00:08:56.325 { 00:08:56.325 "dma_device_id": "system", 00:08:56.325 "dma_device_type": 1 00:08:56.325 }, 00:08:56.325 { 00:08:56.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.325 
"dma_device_type": 2 00:08:56.325 }, 00:08:56.325 { 00:08:56.325 "dma_device_id": "system", 00:08:56.325 "dma_device_type": 1 00:08:56.325 }, 00:08:56.325 { 00:08:56.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.325 "dma_device_type": 2 00:08:56.325 }, 00:08:56.325 { 00:08:56.325 "dma_device_id": "system", 00:08:56.325 "dma_device_type": 1 00:08:56.325 }, 00:08:56.325 { 00:08:56.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.325 "dma_device_type": 2 00:08:56.325 } 00:08:56.325 ], 00:08:56.325 "driver_specific": { 00:08:56.325 "raid": { 00:08:56.325 "uuid": "f9f9a535-8a79-4b38-bff6-8dab8d0a7acd", 00:08:56.325 "strip_size_kb": 64, 00:08:56.325 "state": "online", 00:08:56.325 "raid_level": "raid0", 00:08:56.325 "superblock": false, 00:08:56.325 "num_base_bdevs": 3, 00:08:56.325 "num_base_bdevs_discovered": 3, 00:08:56.325 "num_base_bdevs_operational": 3, 00:08:56.325 "base_bdevs_list": [ 00:08:56.325 { 00:08:56.325 "name": "NewBaseBdev", 00:08:56.325 "uuid": "fff37497-76bb-4b02-a042-5dcb82bed1c9", 00:08:56.325 "is_configured": true, 00:08:56.325 "data_offset": 0, 00:08:56.325 "data_size": 65536 00:08:56.325 }, 00:08:56.325 { 00:08:56.325 "name": "BaseBdev2", 00:08:56.325 "uuid": "7be9bcba-6027-415e-9669-d237a7e92c3c", 00:08:56.325 "is_configured": true, 00:08:56.325 "data_offset": 0, 00:08:56.325 "data_size": 65536 00:08:56.325 }, 00:08:56.325 { 00:08:56.325 "name": "BaseBdev3", 00:08:56.325 "uuid": "1a404030-d410-4ed5-8e8a-e21cb721bd4a", 00:08:56.325 "is_configured": true, 00:08:56.325 "data_offset": 0, 00:08:56.325 "data_size": 65536 00:08:56.325 } 00:08:56.325 ] 00:08:56.325 } 00:08:56.325 } 00:08:56.325 }' 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:56.325 BaseBdev2 00:08:56.325 BaseBdev3' 
00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.325 16:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.584 16:10:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.584 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.584 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.585 [2024-09-28 16:10:11.066425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.585 [2024-09-28 16:10:11.066492] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.585 [2024-09-28 16:10:11.066569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.585 [2024-09-28 16:10:11.066621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.585 [2024-09-28 
16:10:11.066634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63822 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63822 ']' 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63822 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63822 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63822' 00:08:56.585 killing process with pid 63822 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63822 00:08:56.585 [2024-09-28 16:10:11.111410] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.585 16:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63822 00:08:56.844 [2024-09-28 16:10:11.424046] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.223 16:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:58.223 00:08:58.223 real 0m10.860s 00:08:58.223 user 0m16.936s 00:08:58.223 sys 0m2.054s 00:08:58.223 16:10:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.223 16:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.223 ************************************ 00:08:58.223 END TEST raid_state_function_test 00:08:58.223 ************************************ 00:08:58.223 16:10:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:58.224 16:10:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:58.224 16:10:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.224 16:10:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.224 ************************************ 00:08:58.224 START TEST raid_state_function_test_sb 00:08:58.224 ************************************ 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.224 16:10:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:58.224 Process raid pid: 64446 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=64446 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64446' 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64446 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64446 ']' 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.224 16:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.483 [2024-09-28 16:10:12.926310] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:58.483 [2024-09-28 16:10:12.926504] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.483 [2024-09-28 16:10:13.096409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.742 [2024-09-28 16:10:13.343310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.002 [2024-09-28 16:10:13.578423] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.002 [2024-09-28 16:10:13.578553] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.262 [2024-09-28 16:10:13.745421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.262 [2024-09-28 16:10:13.745478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.262 [2024-09-28 16:10:13.745487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.262 [2024-09-28 16:10:13.745497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.262 [2024-09-28 16:10:13.745503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:59.262 [2024-09-28 16:10:13.745512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.262 "name": "Existed_Raid", 00:08:59.262 "uuid": "04f1343a-90bc-4185-9c9b-c86571dfcfcd", 00:08:59.262 "strip_size_kb": 64, 00:08:59.262 "state": "configuring", 00:08:59.262 "raid_level": "raid0", 00:08:59.262 "superblock": true, 00:08:59.262 "num_base_bdevs": 3, 00:08:59.262 "num_base_bdevs_discovered": 0, 00:08:59.262 "num_base_bdevs_operational": 3, 00:08:59.262 "base_bdevs_list": [ 00:08:59.262 { 00:08:59.262 "name": "BaseBdev1", 00:08:59.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.262 "is_configured": false, 00:08:59.262 "data_offset": 0, 00:08:59.262 "data_size": 0 00:08:59.262 }, 00:08:59.262 { 00:08:59.262 "name": "BaseBdev2", 00:08:59.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.262 "is_configured": false, 00:08:59.262 "data_offset": 0, 00:08:59.262 "data_size": 0 00:08:59.262 }, 00:08:59.262 { 00:08:59.262 "name": "BaseBdev3", 00:08:59.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.262 "is_configured": false, 00:08:59.262 "data_offset": 0, 00:08:59.262 "data_size": 0 00:08:59.262 } 00:08:59.262 ] 00:08:59.262 }' 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.262 16:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.530 [2024-09-28 16:10:14.176627] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.530 [2024-09-28 16:10:14.176730] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.530 [2024-09-28 16:10:14.188641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.530 [2024-09-28 16:10:14.188739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.530 [2024-09-28 16:10:14.188766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.530 [2024-09-28 16:10:14.188789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.530 [2024-09-28 16:10:14.188807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.530 [2024-09-28 16:10:14.188827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.530 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.806 [2024-09-28 16:10:14.249841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.806 BaseBdev1 
00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.806 [ 00:08:59.806 { 00:08:59.806 "name": "BaseBdev1", 00:08:59.806 "aliases": [ 00:08:59.806 "6a419255-3afc-4d6a-83a8-bd62d5acae96" 00:08:59.806 ], 00:08:59.806 "product_name": "Malloc disk", 00:08:59.806 "block_size": 512, 00:08:59.806 "num_blocks": 65536, 00:08:59.806 "uuid": "6a419255-3afc-4d6a-83a8-bd62d5acae96", 00:08:59.806 "assigned_rate_limits": { 00:08:59.806 
"rw_ios_per_sec": 0, 00:08:59.806 "rw_mbytes_per_sec": 0, 00:08:59.806 "r_mbytes_per_sec": 0, 00:08:59.806 "w_mbytes_per_sec": 0 00:08:59.806 }, 00:08:59.806 "claimed": true, 00:08:59.806 "claim_type": "exclusive_write", 00:08:59.806 "zoned": false, 00:08:59.806 "supported_io_types": { 00:08:59.806 "read": true, 00:08:59.806 "write": true, 00:08:59.806 "unmap": true, 00:08:59.806 "flush": true, 00:08:59.806 "reset": true, 00:08:59.806 "nvme_admin": false, 00:08:59.806 "nvme_io": false, 00:08:59.806 "nvme_io_md": false, 00:08:59.806 "write_zeroes": true, 00:08:59.806 "zcopy": true, 00:08:59.806 "get_zone_info": false, 00:08:59.806 "zone_management": false, 00:08:59.806 "zone_append": false, 00:08:59.806 "compare": false, 00:08:59.806 "compare_and_write": false, 00:08:59.806 "abort": true, 00:08:59.806 "seek_hole": false, 00:08:59.806 "seek_data": false, 00:08:59.806 "copy": true, 00:08:59.806 "nvme_iov_md": false 00:08:59.806 }, 00:08:59.806 "memory_domains": [ 00:08:59.806 { 00:08:59.806 "dma_device_id": "system", 00:08:59.806 "dma_device_type": 1 00:08:59.806 }, 00:08:59.806 { 00:08:59.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.806 "dma_device_type": 2 00:08:59.806 } 00:08:59.806 ], 00:08:59.806 "driver_specific": {} 00:08:59.806 } 00:08:59.806 ] 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.806 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.806 "name": "Existed_Raid", 00:08:59.806 "uuid": "c328eae9-69f8-4bdd-8ab6-f45b929ea796", 00:08:59.806 "strip_size_kb": 64, 00:08:59.806 "state": "configuring", 00:08:59.806 "raid_level": "raid0", 00:08:59.806 "superblock": true, 00:08:59.806 "num_base_bdevs": 3, 00:08:59.806 "num_base_bdevs_discovered": 1, 00:08:59.807 "num_base_bdevs_operational": 3, 00:08:59.807 "base_bdevs_list": [ 00:08:59.807 { 00:08:59.807 "name": "BaseBdev1", 00:08:59.807 "uuid": "6a419255-3afc-4d6a-83a8-bd62d5acae96", 00:08:59.807 "is_configured": true, 00:08:59.807 "data_offset": 2048, 00:08:59.807 "data_size": 63488 
00:08:59.807 }, 00:08:59.807 { 00:08:59.807 "name": "BaseBdev2", 00:08:59.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.807 "is_configured": false, 00:08:59.807 "data_offset": 0, 00:08:59.807 "data_size": 0 00:08:59.807 }, 00:08:59.807 { 00:08:59.807 "name": "BaseBdev3", 00:08:59.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.807 "is_configured": false, 00:08:59.807 "data_offset": 0, 00:08:59.807 "data_size": 0 00:08:59.807 } 00:08:59.807 ] 00:08:59.807 }' 00:08:59.807 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.807 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.086 [2024-09-28 16:10:14.737026] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.086 [2024-09-28 16:10:14.737072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.086 [2024-09-28 16:10:14.745066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.086 [2024-09-28 
16:10:14.747150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.086 [2024-09-28 16:10:14.747193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.086 [2024-09-28 16:10:14.747204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.086 [2024-09-28 16:10:14.747213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.086 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.366 16:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.366 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.366 "name": "Existed_Raid", 00:09:00.366 "uuid": "afeae80b-ca9b-45ec-98bf-7d6c2c844a07", 00:09:00.366 "strip_size_kb": 64, 00:09:00.366 "state": "configuring", 00:09:00.366 "raid_level": "raid0", 00:09:00.366 "superblock": true, 00:09:00.366 "num_base_bdevs": 3, 00:09:00.366 "num_base_bdevs_discovered": 1, 00:09:00.366 "num_base_bdevs_operational": 3, 00:09:00.366 "base_bdevs_list": [ 00:09:00.366 { 00:09:00.366 "name": "BaseBdev1", 00:09:00.366 "uuid": "6a419255-3afc-4d6a-83a8-bd62d5acae96", 00:09:00.366 "is_configured": true, 00:09:00.366 "data_offset": 2048, 00:09:00.366 "data_size": 63488 00:09:00.366 }, 00:09:00.366 { 00:09:00.366 "name": "BaseBdev2", 00:09:00.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.366 "is_configured": false, 00:09:00.366 "data_offset": 0, 00:09:00.366 "data_size": 0 00:09:00.366 }, 00:09:00.366 { 00:09:00.366 "name": "BaseBdev3", 00:09:00.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.366 "is_configured": false, 00:09:00.366 "data_offset": 0, 00:09:00.366 "data_size": 0 00:09:00.366 } 00:09:00.366 ] 00:09:00.366 }' 00:09:00.366 16:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.366 16:10:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.633 [2024-09-28 16:10:15.223804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.633 BaseBdev2 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.633 [ 00:09:00.633 { 00:09:00.633 "name": "BaseBdev2", 00:09:00.633 "aliases": [ 00:09:00.633 "190e008a-aeda-4f7b-9669-34fb7bf5a013" 00:09:00.633 ], 00:09:00.633 "product_name": "Malloc disk", 00:09:00.633 "block_size": 512, 00:09:00.633 "num_blocks": 65536, 00:09:00.633 "uuid": "190e008a-aeda-4f7b-9669-34fb7bf5a013", 00:09:00.633 "assigned_rate_limits": { 00:09:00.633 "rw_ios_per_sec": 0, 00:09:00.633 "rw_mbytes_per_sec": 0, 00:09:00.633 "r_mbytes_per_sec": 0, 00:09:00.633 "w_mbytes_per_sec": 0 00:09:00.633 }, 00:09:00.633 "claimed": true, 00:09:00.633 "claim_type": "exclusive_write", 00:09:00.633 "zoned": false, 00:09:00.633 "supported_io_types": { 00:09:00.633 "read": true, 00:09:00.633 "write": true, 00:09:00.633 "unmap": true, 00:09:00.633 "flush": true, 00:09:00.633 "reset": true, 00:09:00.633 "nvme_admin": false, 00:09:00.633 "nvme_io": false, 00:09:00.633 "nvme_io_md": false, 00:09:00.633 "write_zeroes": true, 00:09:00.633 "zcopy": true, 00:09:00.633 "get_zone_info": false, 00:09:00.633 "zone_management": false, 00:09:00.633 "zone_append": false, 00:09:00.633 "compare": false, 00:09:00.633 "compare_and_write": false, 00:09:00.633 "abort": true, 00:09:00.633 "seek_hole": false, 00:09:00.633 "seek_data": false, 00:09:00.633 "copy": true, 00:09:00.633 "nvme_iov_md": false 00:09:00.633 }, 00:09:00.633 "memory_domains": [ 00:09:00.633 { 00:09:00.633 "dma_device_id": "system", 00:09:00.633 "dma_device_type": 1 00:09:00.633 }, 00:09:00.633 { 00:09:00.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.633 "dma_device_type": 2 00:09:00.633 } 00:09:00.633 ], 00:09:00.633 "driver_specific": {} 00:09:00.633 } 00:09:00.633 ] 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.633 "name": "Existed_Raid", 00:09:00.633 "uuid": "afeae80b-ca9b-45ec-98bf-7d6c2c844a07", 00:09:00.633 "strip_size_kb": 64, 00:09:00.633 "state": "configuring", 00:09:00.633 "raid_level": "raid0", 00:09:00.633 "superblock": true, 00:09:00.633 "num_base_bdevs": 3, 00:09:00.633 "num_base_bdevs_discovered": 2, 00:09:00.633 "num_base_bdevs_operational": 3, 00:09:00.633 "base_bdevs_list": [ 00:09:00.633 { 00:09:00.633 "name": "BaseBdev1", 00:09:00.633 "uuid": "6a419255-3afc-4d6a-83a8-bd62d5acae96", 00:09:00.633 "is_configured": true, 00:09:00.633 "data_offset": 2048, 00:09:00.633 "data_size": 63488 00:09:00.633 }, 00:09:00.633 { 00:09:00.633 "name": "BaseBdev2", 00:09:00.633 "uuid": "190e008a-aeda-4f7b-9669-34fb7bf5a013", 00:09:00.633 "is_configured": true, 00:09:00.633 "data_offset": 2048, 00:09:00.633 "data_size": 63488 00:09:00.633 }, 00:09:00.633 { 00:09:00.633 "name": "BaseBdev3", 00:09:00.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.633 "is_configured": false, 00:09:00.633 "data_offset": 0, 00:09:00.633 "data_size": 0 00:09:00.633 } 00:09:00.633 ] 00:09:00.633 }' 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.633 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 [2024-09-28 16:10:15.742094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.202 [2024-09-28 16:10:15.742484] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:01.202 [2024-09-28 16:10:15.742512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.202 [2024-09-28 16:10:15.742803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:01.202 [2024-09-28 16:10:15.742971] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:01.202 [2024-09-28 16:10:15.742981] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:01.202 BaseBdev3 00:09:01.202 [2024-09-28 16:10:15.743144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 [ 00:09:01.202 { 00:09:01.202 "name": "BaseBdev3", 00:09:01.202 "aliases": [ 00:09:01.202 "6115aac0-8d7d-4956-9971-04879fba3a23" 00:09:01.202 ], 00:09:01.202 "product_name": "Malloc disk", 00:09:01.202 "block_size": 512, 00:09:01.202 "num_blocks": 65536, 00:09:01.202 "uuid": "6115aac0-8d7d-4956-9971-04879fba3a23", 00:09:01.202 "assigned_rate_limits": { 00:09:01.202 "rw_ios_per_sec": 0, 00:09:01.202 "rw_mbytes_per_sec": 0, 00:09:01.202 "r_mbytes_per_sec": 0, 00:09:01.202 "w_mbytes_per_sec": 0 00:09:01.202 }, 00:09:01.202 "claimed": true, 00:09:01.202 "claim_type": "exclusive_write", 00:09:01.202 "zoned": false, 00:09:01.202 "supported_io_types": { 00:09:01.202 "read": true, 00:09:01.202 "write": true, 00:09:01.202 "unmap": true, 00:09:01.202 "flush": true, 00:09:01.202 "reset": true, 00:09:01.202 "nvme_admin": false, 00:09:01.202 "nvme_io": false, 00:09:01.202 "nvme_io_md": false, 00:09:01.202 "write_zeroes": true, 00:09:01.202 "zcopy": true, 00:09:01.202 "get_zone_info": false, 00:09:01.202 "zone_management": false, 00:09:01.202 "zone_append": false, 00:09:01.202 "compare": false, 00:09:01.202 "compare_and_write": false, 00:09:01.202 "abort": true, 00:09:01.202 "seek_hole": false, 00:09:01.202 "seek_data": false, 00:09:01.202 "copy": true, 00:09:01.202 "nvme_iov_md": false 00:09:01.202 }, 00:09:01.202 "memory_domains": [ 00:09:01.202 { 00:09:01.202 "dma_device_id": "system", 00:09:01.202 "dma_device_type": 1 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.202 "dma_device_type": 2 00:09:01.202 } 00:09:01.202 ], 00:09:01.202 "driver_specific": 
{} 00:09:01.202 } 00:09:01.202 ] 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 
16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.202 "name": "Existed_Raid", 00:09:01.202 "uuid": "afeae80b-ca9b-45ec-98bf-7d6c2c844a07", 00:09:01.202 "strip_size_kb": 64, 00:09:01.202 "state": "online", 00:09:01.202 "raid_level": "raid0", 00:09:01.202 "superblock": true, 00:09:01.202 "num_base_bdevs": 3, 00:09:01.202 "num_base_bdevs_discovered": 3, 00:09:01.202 "num_base_bdevs_operational": 3, 00:09:01.202 "base_bdevs_list": [ 00:09:01.202 { 00:09:01.202 "name": "BaseBdev1", 00:09:01.202 "uuid": "6a419255-3afc-4d6a-83a8-bd62d5acae96", 00:09:01.202 "is_configured": true, 00:09:01.202 "data_offset": 2048, 00:09:01.202 "data_size": 63488 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "name": "BaseBdev2", 00:09:01.202 "uuid": "190e008a-aeda-4f7b-9669-34fb7bf5a013", 00:09:01.202 "is_configured": true, 00:09:01.202 "data_offset": 2048, 00:09:01.202 "data_size": 63488 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "name": "BaseBdev3", 00:09:01.202 "uuid": "6115aac0-8d7d-4956-9971-04879fba3a23", 00:09:01.202 "is_configured": true, 00:09:01.202 "data_offset": 2048, 00:09:01.202 "data_size": 63488 00:09:01.202 } 00:09:01.202 ] 00:09:01.202 }' 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.202 16:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.770 [2024-09-28 16:10:16.241609] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.770 "name": "Existed_Raid", 00:09:01.770 "aliases": [ 00:09:01.770 "afeae80b-ca9b-45ec-98bf-7d6c2c844a07" 00:09:01.770 ], 00:09:01.770 "product_name": "Raid Volume", 00:09:01.770 "block_size": 512, 00:09:01.770 "num_blocks": 190464, 00:09:01.770 "uuid": "afeae80b-ca9b-45ec-98bf-7d6c2c844a07", 00:09:01.770 "assigned_rate_limits": { 00:09:01.770 "rw_ios_per_sec": 0, 00:09:01.770 "rw_mbytes_per_sec": 0, 00:09:01.770 "r_mbytes_per_sec": 0, 00:09:01.770 "w_mbytes_per_sec": 0 00:09:01.770 }, 00:09:01.770 "claimed": false, 00:09:01.770 "zoned": false, 00:09:01.770 "supported_io_types": { 00:09:01.770 "read": true, 00:09:01.770 "write": true, 00:09:01.770 "unmap": true, 00:09:01.770 "flush": true, 00:09:01.770 "reset": true, 00:09:01.770 "nvme_admin": false, 00:09:01.770 "nvme_io": false, 00:09:01.770 "nvme_io_md": false, 00:09:01.770 
"write_zeroes": true, 00:09:01.770 "zcopy": false, 00:09:01.770 "get_zone_info": false, 00:09:01.770 "zone_management": false, 00:09:01.770 "zone_append": false, 00:09:01.770 "compare": false, 00:09:01.770 "compare_and_write": false, 00:09:01.770 "abort": false, 00:09:01.770 "seek_hole": false, 00:09:01.770 "seek_data": false, 00:09:01.770 "copy": false, 00:09:01.770 "nvme_iov_md": false 00:09:01.770 }, 00:09:01.770 "memory_domains": [ 00:09:01.770 { 00:09:01.770 "dma_device_id": "system", 00:09:01.770 "dma_device_type": 1 00:09:01.770 }, 00:09:01.770 { 00:09:01.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.770 "dma_device_type": 2 00:09:01.770 }, 00:09:01.770 { 00:09:01.770 "dma_device_id": "system", 00:09:01.770 "dma_device_type": 1 00:09:01.770 }, 00:09:01.770 { 00:09:01.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.770 "dma_device_type": 2 00:09:01.770 }, 00:09:01.770 { 00:09:01.770 "dma_device_id": "system", 00:09:01.770 "dma_device_type": 1 00:09:01.770 }, 00:09:01.770 { 00:09:01.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.770 "dma_device_type": 2 00:09:01.770 } 00:09:01.770 ], 00:09:01.770 "driver_specific": { 00:09:01.770 "raid": { 00:09:01.770 "uuid": "afeae80b-ca9b-45ec-98bf-7d6c2c844a07", 00:09:01.770 "strip_size_kb": 64, 00:09:01.770 "state": "online", 00:09:01.770 "raid_level": "raid0", 00:09:01.770 "superblock": true, 00:09:01.770 "num_base_bdevs": 3, 00:09:01.770 "num_base_bdevs_discovered": 3, 00:09:01.770 "num_base_bdevs_operational": 3, 00:09:01.770 "base_bdevs_list": [ 00:09:01.770 { 00:09:01.770 "name": "BaseBdev1", 00:09:01.770 "uuid": "6a419255-3afc-4d6a-83a8-bd62d5acae96", 00:09:01.770 "is_configured": true, 00:09:01.770 "data_offset": 2048, 00:09:01.770 "data_size": 63488 00:09:01.770 }, 00:09:01.770 { 00:09:01.770 "name": "BaseBdev2", 00:09:01.770 "uuid": "190e008a-aeda-4f7b-9669-34fb7bf5a013", 00:09:01.770 "is_configured": true, 00:09:01.770 "data_offset": 2048, 00:09:01.770 "data_size": 63488 00:09:01.770 }, 
00:09:01.770 { 00:09:01.770 "name": "BaseBdev3", 00:09:01.770 "uuid": "6115aac0-8d7d-4956-9971-04879fba3a23", 00:09:01.770 "is_configured": true, 00:09:01.770 "data_offset": 2048, 00:09:01.770 "data_size": 63488 00:09:01.770 } 00:09:01.770 ] 00:09:01.770 } 00:09:01.770 } 00:09:01.770 }' 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:01.770 BaseBdev2 00:09:01.770 BaseBdev3' 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.770 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.771 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.771 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.771 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.771 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.771 
16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.771 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.771 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.771 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.029 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.029 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.029 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.029 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.030 [2024-09-28 16:10:16.516809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.030 [2024-09-28 16:10:16.516880] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.030 [2024-09-28 16:10:16.516943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.030 "name": "Existed_Raid", 00:09:02.030 "uuid": "afeae80b-ca9b-45ec-98bf-7d6c2c844a07", 00:09:02.030 "strip_size_kb": 64, 00:09:02.030 "state": "offline", 00:09:02.030 "raid_level": "raid0", 00:09:02.030 "superblock": true, 00:09:02.030 "num_base_bdevs": 3, 00:09:02.030 "num_base_bdevs_discovered": 2, 00:09:02.030 "num_base_bdevs_operational": 2, 00:09:02.030 "base_bdevs_list": [ 00:09:02.030 { 00:09:02.030 "name": null, 00:09:02.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.030 "is_configured": false, 00:09:02.030 "data_offset": 0, 00:09:02.030 "data_size": 63488 00:09:02.030 }, 00:09:02.030 { 00:09:02.030 "name": "BaseBdev2", 00:09:02.030 "uuid": "190e008a-aeda-4f7b-9669-34fb7bf5a013", 00:09:02.030 "is_configured": true, 00:09:02.030 "data_offset": 2048, 00:09:02.030 "data_size": 63488 00:09:02.030 }, 00:09:02.030 { 00:09:02.030 "name": "BaseBdev3", 00:09:02.030 "uuid": "6115aac0-8d7d-4956-9971-04879fba3a23", 
00:09:02.030 "is_configured": true, 00:09:02.030 "data_offset": 2048, 00:09:02.030 "data_size": 63488 00:09:02.030 } 00:09:02.030 ] 00:09:02.030 }' 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.030 16:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 [2024-09-28 16:10:17.125207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.598 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:02.599 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.599 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.858 [2024-09-28 16:10:17.284278] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.858 [2024-09-28 16:10:17.284339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.858 BaseBdev2 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:02.858 16:10:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.858 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.858 [ 00:09:02.858 { 00:09:02.858 "name": "BaseBdev2", 00:09:02.858 "aliases": [ 00:09:02.858 "924a306e-35ee-4946-a661-e0b0a8ab0baa" 00:09:02.858 ], 00:09:02.858 "product_name": "Malloc disk", 00:09:02.858 "block_size": 512, 00:09:02.858 "num_blocks": 65536, 00:09:02.858 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:02.858 "assigned_rate_limits": { 00:09:02.858 "rw_ios_per_sec": 0, 00:09:02.858 "rw_mbytes_per_sec": 0, 00:09:02.858 "r_mbytes_per_sec": 0, 00:09:02.858 "w_mbytes_per_sec": 0 00:09:02.858 }, 00:09:02.858 "claimed": false, 00:09:02.858 "zoned": false, 00:09:02.858 "supported_io_types": { 00:09:02.858 "read": true, 00:09:02.858 "write": true, 00:09:02.858 "unmap": true, 00:09:02.858 "flush": true, 00:09:02.858 "reset": true, 00:09:02.858 "nvme_admin": false, 00:09:02.858 "nvme_io": false, 00:09:02.858 "nvme_io_md": false, 00:09:02.858 "write_zeroes": true, 00:09:02.858 "zcopy": true, 00:09:02.859 "get_zone_info": false, 00:09:02.859 
"zone_management": false, 00:09:02.859 "zone_append": false, 00:09:02.859 "compare": false, 00:09:02.859 "compare_and_write": false, 00:09:02.859 "abort": true, 00:09:02.859 "seek_hole": false, 00:09:02.859 "seek_data": false, 00:09:02.859 "copy": true, 00:09:02.859 "nvme_iov_md": false 00:09:02.859 }, 00:09:02.859 "memory_domains": [ 00:09:02.859 { 00:09:02.859 "dma_device_id": "system", 00:09:02.859 "dma_device_type": 1 00:09:02.859 }, 00:09:02.859 { 00:09:02.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.859 "dma_device_type": 2 00:09:02.859 } 00:09:02.859 ], 00:09:02.859 "driver_specific": {} 00:09:02.859 } 00:09:02.859 ] 00:09:02.859 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.859 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:02.859 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.859 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.859 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.859 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.859 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.117 BaseBdev3 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.117 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.118 [ 00:09:03.118 { 00:09:03.118 "name": "BaseBdev3", 00:09:03.118 "aliases": [ 00:09:03.118 "7e5ab499-db9c-4b58-8454-8a90340de44c" 00:09:03.118 ], 00:09:03.118 "product_name": "Malloc disk", 00:09:03.118 "block_size": 512, 00:09:03.118 "num_blocks": 65536, 00:09:03.118 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:03.118 "assigned_rate_limits": { 00:09:03.118 "rw_ios_per_sec": 0, 00:09:03.118 "rw_mbytes_per_sec": 0, 00:09:03.118 "r_mbytes_per_sec": 0, 00:09:03.118 "w_mbytes_per_sec": 0 00:09:03.118 }, 00:09:03.118 "claimed": false, 00:09:03.118 "zoned": false, 00:09:03.118 "supported_io_types": { 00:09:03.118 "read": true, 00:09:03.118 "write": true, 00:09:03.118 "unmap": true, 00:09:03.118 "flush": true, 00:09:03.118 "reset": true, 00:09:03.118 "nvme_admin": false, 00:09:03.118 "nvme_io": false, 00:09:03.118 "nvme_io_md": false, 00:09:03.118 "write_zeroes": true, 00:09:03.118 
"zcopy": true, 00:09:03.118 "get_zone_info": false, 00:09:03.118 "zone_management": false, 00:09:03.118 "zone_append": false, 00:09:03.118 "compare": false, 00:09:03.118 "compare_and_write": false, 00:09:03.118 "abort": true, 00:09:03.118 "seek_hole": false, 00:09:03.118 "seek_data": false, 00:09:03.118 "copy": true, 00:09:03.118 "nvme_iov_md": false 00:09:03.118 }, 00:09:03.118 "memory_domains": [ 00:09:03.118 { 00:09:03.118 "dma_device_id": "system", 00:09:03.118 "dma_device_type": 1 00:09:03.118 }, 00:09:03.118 { 00:09:03.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.118 "dma_device_type": 2 00:09:03.118 } 00:09:03.118 ], 00:09:03.118 "driver_specific": {} 00:09:03.118 } 00:09:03.118 ] 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.118 [2024-09-28 16:10:17.614645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.118 [2024-09-28 16:10:17.614748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.118 [2024-09-28 16:10:17.614789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.118 [2024-09-28 16:10:17.616834] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.118 16:10:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.118 "name": "Existed_Raid", 00:09:03.118 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:03.118 "strip_size_kb": 64, 00:09:03.118 "state": "configuring", 00:09:03.118 "raid_level": "raid0", 00:09:03.118 "superblock": true, 00:09:03.118 "num_base_bdevs": 3, 00:09:03.118 "num_base_bdevs_discovered": 2, 00:09:03.118 "num_base_bdevs_operational": 3, 00:09:03.118 "base_bdevs_list": [ 00:09:03.118 { 00:09:03.118 "name": "BaseBdev1", 00:09:03.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.118 "is_configured": false, 00:09:03.118 "data_offset": 0, 00:09:03.118 "data_size": 0 00:09:03.118 }, 00:09:03.118 { 00:09:03.118 "name": "BaseBdev2", 00:09:03.118 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:03.118 "is_configured": true, 00:09:03.118 "data_offset": 2048, 00:09:03.118 "data_size": 63488 00:09:03.118 }, 00:09:03.118 { 00:09:03.118 "name": "BaseBdev3", 00:09:03.118 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:03.118 "is_configured": true, 00:09:03.118 "data_offset": 2048, 00:09:03.118 "data_size": 63488 00:09:03.118 } 00:09:03.118 ] 00:09:03.118 }' 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.118 16:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.378 [2024-09-28 16:10:18.037911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.378 16:10:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.378 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.637 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.637 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.637 "name": "Existed_Raid", 00:09:03.637 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:03.637 "strip_size_kb": 64, 
00:09:03.637 "state": "configuring", 00:09:03.637 "raid_level": "raid0", 00:09:03.637 "superblock": true, 00:09:03.637 "num_base_bdevs": 3, 00:09:03.637 "num_base_bdevs_discovered": 1, 00:09:03.637 "num_base_bdevs_operational": 3, 00:09:03.637 "base_bdevs_list": [ 00:09:03.637 { 00:09:03.637 "name": "BaseBdev1", 00:09:03.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.637 "is_configured": false, 00:09:03.637 "data_offset": 0, 00:09:03.637 "data_size": 0 00:09:03.637 }, 00:09:03.637 { 00:09:03.637 "name": null, 00:09:03.638 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:03.638 "is_configured": false, 00:09:03.638 "data_offset": 0, 00:09:03.638 "data_size": 63488 00:09:03.638 }, 00:09:03.638 { 00:09:03.638 "name": "BaseBdev3", 00:09:03.638 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:03.638 "is_configured": true, 00:09:03.638 "data_offset": 2048, 00:09:03.638 "data_size": 63488 00:09:03.638 } 00:09:03.638 ] 00:09:03.638 }' 00:09:03.638 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.638 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.897 [2024-09-28 16:10:18.542358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.897 BaseBdev1 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.897 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.897 
[ 00:09:03.897 { 00:09:03.897 "name": "BaseBdev1", 00:09:03.898 "aliases": [ 00:09:03.898 "5cea5872-7750-4664-aac0-3e8c129e540c" 00:09:03.898 ], 00:09:03.898 "product_name": "Malloc disk", 00:09:03.898 "block_size": 512, 00:09:03.898 "num_blocks": 65536, 00:09:03.898 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:03.898 "assigned_rate_limits": { 00:09:03.898 "rw_ios_per_sec": 0, 00:09:03.898 "rw_mbytes_per_sec": 0, 00:09:03.898 "r_mbytes_per_sec": 0, 00:09:03.898 "w_mbytes_per_sec": 0 00:09:03.898 }, 00:09:03.898 "claimed": true, 00:09:03.898 "claim_type": "exclusive_write", 00:09:03.898 "zoned": false, 00:09:03.898 "supported_io_types": { 00:09:03.898 "read": true, 00:09:03.898 "write": true, 00:09:03.898 "unmap": true, 00:09:03.898 "flush": true, 00:09:03.898 "reset": true, 00:09:03.898 "nvme_admin": false, 00:09:03.898 "nvme_io": false, 00:09:03.898 "nvme_io_md": false, 00:09:03.898 "write_zeroes": true, 00:09:03.898 "zcopy": true, 00:09:03.898 "get_zone_info": false, 00:09:03.898 "zone_management": false, 00:09:03.898 "zone_append": false, 00:09:03.898 "compare": false, 00:09:03.898 "compare_and_write": false, 00:09:03.898 "abort": true, 00:09:03.898 "seek_hole": false, 00:09:03.898 "seek_data": false, 00:09:03.898 "copy": true, 00:09:03.898 "nvme_iov_md": false 00:09:03.898 }, 00:09:03.898 "memory_domains": [ 00:09:03.898 { 00:09:03.898 "dma_device_id": "system", 00:09:03.898 "dma_device_type": 1 00:09:03.898 }, 00:09:03.898 { 00:09:03.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.898 "dma_device_type": 2 00:09:03.898 } 00:09:03.898 ], 00:09:03.898 "driver_specific": {} 00:09:03.898 } 00:09:03.898 ] 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.898 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.157 "name": "Existed_Raid", 00:09:04.157 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:04.157 "strip_size_kb": 64, 00:09:04.157 "state": "configuring", 00:09:04.157 "raid_level": "raid0", 00:09:04.157 "superblock": true, 
00:09:04.157 "num_base_bdevs": 3, 00:09:04.157 "num_base_bdevs_discovered": 2, 00:09:04.157 "num_base_bdevs_operational": 3, 00:09:04.157 "base_bdevs_list": [ 00:09:04.157 { 00:09:04.157 "name": "BaseBdev1", 00:09:04.157 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:04.157 "is_configured": true, 00:09:04.157 "data_offset": 2048, 00:09:04.157 "data_size": 63488 00:09:04.157 }, 00:09:04.157 { 00:09:04.157 "name": null, 00:09:04.157 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:04.157 "is_configured": false, 00:09:04.157 "data_offset": 0, 00:09:04.157 "data_size": 63488 00:09:04.157 }, 00:09:04.157 { 00:09:04.157 "name": "BaseBdev3", 00:09:04.157 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:04.157 "is_configured": true, 00:09:04.157 "data_offset": 2048, 00:09:04.157 "data_size": 63488 00:09:04.157 } 00:09:04.157 ] 00:09:04.157 }' 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.157 16:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.416 [2024-09-28 16:10:19.077470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.416 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:04.675 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.675 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.675 "name": "Existed_Raid", 00:09:04.675 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:04.675 "strip_size_kb": 64, 00:09:04.675 "state": "configuring", 00:09:04.675 "raid_level": "raid0", 00:09:04.675 "superblock": true, 00:09:04.675 "num_base_bdevs": 3, 00:09:04.675 "num_base_bdevs_discovered": 1, 00:09:04.675 "num_base_bdevs_operational": 3, 00:09:04.675 "base_bdevs_list": [ 00:09:04.675 { 00:09:04.675 "name": "BaseBdev1", 00:09:04.675 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:04.675 "is_configured": true, 00:09:04.675 "data_offset": 2048, 00:09:04.675 "data_size": 63488 00:09:04.675 }, 00:09:04.675 { 00:09:04.675 "name": null, 00:09:04.675 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:04.675 "is_configured": false, 00:09:04.675 "data_offset": 0, 00:09:04.675 "data_size": 63488 00:09:04.675 }, 00:09:04.675 { 00:09:04.675 "name": null, 00:09:04.675 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:04.675 "is_configured": false, 00:09:04.675 "data_offset": 0, 00:09:04.675 "data_size": 63488 00:09:04.675 } 00:09:04.675 ] 00:09:04.675 }' 00:09:04.675 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.675 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.935 [2024-09-28 16:10:19.536726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.935 "name": "Existed_Raid", 00:09:04.935 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:04.935 "strip_size_kb": 64, 00:09:04.935 "state": "configuring", 00:09:04.935 "raid_level": "raid0", 00:09:04.935 "superblock": true, 00:09:04.935 "num_base_bdevs": 3, 00:09:04.935 "num_base_bdevs_discovered": 2, 00:09:04.935 "num_base_bdevs_operational": 3, 00:09:04.935 "base_bdevs_list": [ 00:09:04.935 { 00:09:04.935 "name": "BaseBdev1", 00:09:04.935 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:04.935 "is_configured": true, 00:09:04.935 "data_offset": 2048, 00:09:04.935 "data_size": 63488 00:09:04.935 }, 00:09:04.935 { 00:09:04.935 "name": null, 00:09:04.935 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:04.935 "is_configured": false, 00:09:04.935 "data_offset": 0, 00:09:04.935 "data_size": 63488 00:09:04.935 }, 00:09:04.935 { 00:09:04.935 "name": "BaseBdev3", 00:09:04.935 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:04.935 "is_configured": true, 00:09:04.935 "data_offset": 2048, 00:09:04.935 "data_size": 63488 00:09:04.935 } 00:09:04.935 ] 00:09:04.935 }' 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.935 16:10:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.503 16:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.503 [2024-09-28 16:10:19.987998] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.503 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.504 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.504 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.504 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.504 "name": "Existed_Raid", 00:09:05.504 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:05.504 "strip_size_kb": 64, 00:09:05.504 "state": "configuring", 00:09:05.504 "raid_level": "raid0", 00:09:05.504 "superblock": true, 00:09:05.504 "num_base_bdevs": 3, 00:09:05.504 "num_base_bdevs_discovered": 1, 00:09:05.504 "num_base_bdevs_operational": 3, 00:09:05.504 "base_bdevs_list": [ 00:09:05.504 { 00:09:05.504 "name": null, 00:09:05.504 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:05.504 "is_configured": false, 00:09:05.504 "data_offset": 0, 00:09:05.504 "data_size": 63488 00:09:05.504 }, 00:09:05.504 { 00:09:05.504 "name": null, 00:09:05.504 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:05.504 "is_configured": false, 00:09:05.504 "data_offset": 0, 00:09:05.504 
"data_size": 63488 00:09:05.504 }, 00:09:05.504 { 00:09:05.504 "name": "BaseBdev3", 00:09:05.504 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:05.504 "is_configured": true, 00:09:05.504 "data_offset": 2048, 00:09:05.504 "data_size": 63488 00:09:05.504 } 00:09:05.504 ] 00:09:05.504 }' 00:09:05.504 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.504 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.072 [2024-09-28 16:10:20.577438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.072 16:10:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.072 "name": "Existed_Raid", 00:09:06.072 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:06.072 "strip_size_kb": 64, 00:09:06.072 "state": "configuring", 00:09:06.072 "raid_level": "raid0", 00:09:06.072 "superblock": true, 00:09:06.072 "num_base_bdevs": 3, 00:09:06.072 
"num_base_bdevs_discovered": 2, 00:09:06.072 "num_base_bdevs_operational": 3, 00:09:06.072 "base_bdevs_list": [ 00:09:06.072 { 00:09:06.072 "name": null, 00:09:06.072 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:06.072 "is_configured": false, 00:09:06.072 "data_offset": 0, 00:09:06.072 "data_size": 63488 00:09:06.072 }, 00:09:06.072 { 00:09:06.072 "name": "BaseBdev2", 00:09:06.072 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:06.072 "is_configured": true, 00:09:06.072 "data_offset": 2048, 00:09:06.072 "data_size": 63488 00:09:06.072 }, 00:09:06.072 { 00:09:06.072 "name": "BaseBdev3", 00:09:06.072 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:06.072 "is_configured": true, 00:09:06.072 "data_offset": 2048, 00:09:06.072 "data_size": 63488 00:09:06.072 } 00:09:06.072 ] 00:09:06.072 }' 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.072 16:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:06.641 16:10:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5cea5872-7750-4664-aac0-3e8c129e540c 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 [2024-09-28 16:10:21.146417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:06.641 [2024-09-28 16:10:21.146633] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:06.641 [2024-09-28 16:10:21.146649] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.641 [2024-09-28 16:10:21.146924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:06.641 [2024-09-28 16:10:21.147059] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:06.641 [2024-09-28 16:10:21.147067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:06.641 [2024-09-28 16:10:21.147219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.641 NewBaseBdev 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:06.641 
16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 [ 00:09:06.641 { 00:09:06.641 "name": "NewBaseBdev", 00:09:06.641 "aliases": [ 00:09:06.641 "5cea5872-7750-4664-aac0-3e8c129e540c" 00:09:06.641 ], 00:09:06.641 "product_name": "Malloc disk", 00:09:06.641 "block_size": 512, 00:09:06.641 "num_blocks": 65536, 00:09:06.641 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:06.641 "assigned_rate_limits": { 00:09:06.641 "rw_ios_per_sec": 0, 00:09:06.641 "rw_mbytes_per_sec": 0, 00:09:06.641 "r_mbytes_per_sec": 0, 00:09:06.641 "w_mbytes_per_sec": 0 00:09:06.641 }, 00:09:06.641 "claimed": true, 00:09:06.641 "claim_type": "exclusive_write", 00:09:06.641 "zoned": false, 00:09:06.641 "supported_io_types": { 00:09:06.641 "read": true, 00:09:06.641 "write": true, 00:09:06.641 
"unmap": true, 00:09:06.641 "flush": true, 00:09:06.641 "reset": true, 00:09:06.641 "nvme_admin": false, 00:09:06.641 "nvme_io": false, 00:09:06.641 "nvme_io_md": false, 00:09:06.641 "write_zeroes": true, 00:09:06.641 "zcopy": true, 00:09:06.641 "get_zone_info": false, 00:09:06.641 "zone_management": false, 00:09:06.641 "zone_append": false, 00:09:06.641 "compare": false, 00:09:06.641 "compare_and_write": false, 00:09:06.641 "abort": true, 00:09:06.641 "seek_hole": false, 00:09:06.641 "seek_data": false, 00:09:06.641 "copy": true, 00:09:06.641 "nvme_iov_md": false 00:09:06.641 }, 00:09:06.641 "memory_domains": [ 00:09:06.641 { 00:09:06.641 "dma_device_id": "system", 00:09:06.641 "dma_device_type": 1 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.641 "dma_device_type": 2 00:09:06.641 } 00:09:06.641 ], 00:09:06.641 "driver_specific": {} 00:09:06.641 } 00:09:06.641 ] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.641 "name": "Existed_Raid", 00:09:06.641 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:06.641 "strip_size_kb": 64, 00:09:06.641 "state": "online", 00:09:06.641 "raid_level": "raid0", 00:09:06.641 "superblock": true, 00:09:06.641 "num_base_bdevs": 3, 00:09:06.641 "num_base_bdevs_discovered": 3, 00:09:06.641 "num_base_bdevs_operational": 3, 00:09:06.641 "base_bdevs_list": [ 00:09:06.641 { 00:09:06.641 "name": "NewBaseBdev", 00:09:06.641 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:06.641 "is_configured": true, 00:09:06.641 "data_offset": 2048, 00:09:06.641 "data_size": 63488 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "name": "BaseBdev2", 00:09:06.641 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:06.641 "is_configured": true, 00:09:06.641 "data_offset": 2048, 00:09:06.641 "data_size": 63488 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "name": "BaseBdev3", 00:09:06.641 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:06.641 
"is_configured": true, 00:09:06.641 "data_offset": 2048, 00:09:06.641 "data_size": 63488 00:09:06.641 } 00:09:06.641 ] 00:09:06.641 }' 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.641 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.901 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.901 [2024-09-28 16:10:21.581951] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.161 "name": "Existed_Raid", 00:09:07.161 "aliases": [ 00:09:07.161 "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297" 00:09:07.161 ], 00:09:07.161 "product_name": "Raid 
Volume", 00:09:07.161 "block_size": 512, 00:09:07.161 "num_blocks": 190464, 00:09:07.161 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:07.161 "assigned_rate_limits": { 00:09:07.161 "rw_ios_per_sec": 0, 00:09:07.161 "rw_mbytes_per_sec": 0, 00:09:07.161 "r_mbytes_per_sec": 0, 00:09:07.161 "w_mbytes_per_sec": 0 00:09:07.161 }, 00:09:07.161 "claimed": false, 00:09:07.161 "zoned": false, 00:09:07.161 "supported_io_types": { 00:09:07.161 "read": true, 00:09:07.161 "write": true, 00:09:07.161 "unmap": true, 00:09:07.161 "flush": true, 00:09:07.161 "reset": true, 00:09:07.161 "nvme_admin": false, 00:09:07.161 "nvme_io": false, 00:09:07.161 "nvme_io_md": false, 00:09:07.161 "write_zeroes": true, 00:09:07.161 "zcopy": false, 00:09:07.161 "get_zone_info": false, 00:09:07.161 "zone_management": false, 00:09:07.161 "zone_append": false, 00:09:07.161 "compare": false, 00:09:07.161 "compare_and_write": false, 00:09:07.161 "abort": false, 00:09:07.161 "seek_hole": false, 00:09:07.161 "seek_data": false, 00:09:07.161 "copy": false, 00:09:07.161 "nvme_iov_md": false 00:09:07.161 }, 00:09:07.161 "memory_domains": [ 00:09:07.161 { 00:09:07.161 "dma_device_id": "system", 00:09:07.161 "dma_device_type": 1 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.161 "dma_device_type": 2 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "dma_device_id": "system", 00:09:07.161 "dma_device_type": 1 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.161 "dma_device_type": 2 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "dma_device_id": "system", 00:09:07.161 "dma_device_type": 1 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.161 "dma_device_type": 2 00:09:07.161 } 00:09:07.161 ], 00:09:07.161 "driver_specific": { 00:09:07.161 "raid": { 00:09:07.161 "uuid": "2783ee0b-9c69-4a9a-b0cc-d4b1aec14297", 00:09:07.161 "strip_size_kb": 64, 00:09:07.161 "state": "online", 
00:09:07.161 "raid_level": "raid0", 00:09:07.161 "superblock": true, 00:09:07.161 "num_base_bdevs": 3, 00:09:07.161 "num_base_bdevs_discovered": 3, 00:09:07.161 "num_base_bdevs_operational": 3, 00:09:07.161 "base_bdevs_list": [ 00:09:07.161 { 00:09:07.161 "name": "NewBaseBdev", 00:09:07.161 "uuid": "5cea5872-7750-4664-aac0-3e8c129e540c", 00:09:07.161 "is_configured": true, 00:09:07.161 "data_offset": 2048, 00:09:07.161 "data_size": 63488 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "name": "BaseBdev2", 00:09:07.161 "uuid": "924a306e-35ee-4946-a661-e0b0a8ab0baa", 00:09:07.161 "is_configured": true, 00:09:07.161 "data_offset": 2048, 00:09:07.161 "data_size": 63488 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "name": "BaseBdev3", 00:09:07.161 "uuid": "7e5ab499-db9c-4b58-8454-8a90340de44c", 00:09:07.161 "is_configured": true, 00:09:07.161 "data_offset": 2048, 00:09:07.161 "data_size": 63488 00:09:07.161 } 00:09:07.161 ] 00:09:07.161 } 00:09:07.161 } 00:09:07.161 }' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:07.161 BaseBdev2 00:09:07.161 BaseBdev3' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.161 16:10:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.161 16:10:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.161 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.421 [2024-09-28 16:10:21.865159] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.421 [2024-09-28 16:10:21.865183] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.421 [2024-09-28 16:10:21.865262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.421 [2024-09-28 16:10:21.865315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.421 [2024-09-28 16:10:21.865381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64446 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64446 ']' 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
64446 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64446 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.421 killing process with pid 64446 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64446' 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64446 00:09:07.421 [2024-09-28 16:10:21.918958] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.421 16:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64446 00:09:07.680 [2024-09-28 16:10:22.230037] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.061 16:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:09.061 00:09:09.061 real 0m10.725s 00:09:09.061 user 0m16.682s 00:09:09.061 sys 0m2.054s 00:09:09.061 16:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.061 ************************************ 00:09:09.061 END TEST raid_state_function_test_sb 00:09:09.061 ************************************ 00:09:09.061 16:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.061 16:10:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:09.061 16:10:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:09.061 
16:10:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.061 16:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.061 ************************************ 00:09:09.061 START TEST raid_superblock_test 00:09:09.061 ************************************ 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65073 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65073 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65073 ']' 00:09:09.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.061 16:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.061 [2024-09-28 16:10:23.720715] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:09.061 [2024-09-28 16:10:23.720828] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65073 ] 00:09:09.321 [2024-09-28 16:10:23.886085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.581 [2024-09-28 16:10:24.124909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.840 [2024-09-28 16:10:24.337875] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.841 [2024-09-28 16:10:24.337930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:10.101 
16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 malloc1 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 [2024-09-28 16:10:24.609961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:10.101 [2024-09-28 16:10:24.610087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.101 [2024-09-28 16:10:24.610134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:10.101 [2024-09-28 16:10:24.610183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.101 [2024-09-28 16:10:24.612574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.101 [2024-09-28 16:10:24.612645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:10.101 pt1 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 malloc2 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 [2024-09-28 16:10:24.696605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.101 [2024-09-28 16:10:24.696718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.101 [2024-09-28 16:10:24.696748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:10.101 [2024-09-28 16:10:24.696757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.101 [2024-09-28 16:10:24.699168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.101 [2024-09-28 16:10:24.699205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.101 
pt2 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 malloc3 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 [2024-09-28 16:10:24.756733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.101 [2024-09-28 16:10:24.756825] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.101 [2024-09-28 16:10:24.756881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:10.101 [2024-09-28 16:10:24.756909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.101 [2024-09-28 16:10:24.759274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.101 [2024-09-28 16:10:24.759348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.101 pt3 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:10.101 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.102 [2024-09-28 16:10:24.768794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:10.102 [2024-09-28 16:10:24.770943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.102 [2024-09-28 16:10:24.771065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.102 [2024-09-28 16:10:24.771264] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:10.102 [2024-09-28 16:10:24.771312] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.102 [2024-09-28 16:10:24.771569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:10.102 [2024-09-28 16:10:24.771769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:10.102 [2024-09-28 16:10:24.771810] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:10.102 [2024-09-28 16:10:24.771993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.102 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.102 16:10:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.362 "name": "raid_bdev1", 00:09:10.362 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:10.362 "strip_size_kb": 64, 00:09:10.362 "state": "online", 00:09:10.362 "raid_level": "raid0", 00:09:10.362 "superblock": true, 00:09:10.362 "num_base_bdevs": 3, 00:09:10.362 "num_base_bdevs_discovered": 3, 00:09:10.362 "num_base_bdevs_operational": 3, 00:09:10.362 "base_bdevs_list": [ 00:09:10.362 { 00:09:10.362 "name": "pt1", 00:09:10.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.362 "is_configured": true, 00:09:10.362 "data_offset": 2048, 00:09:10.362 "data_size": 63488 00:09:10.362 }, 00:09:10.362 { 00:09:10.362 "name": "pt2", 00:09:10.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.362 "is_configured": true, 00:09:10.362 "data_offset": 2048, 00:09:10.362 "data_size": 63488 00:09:10.362 }, 00:09:10.362 { 00:09:10.362 "name": "pt3", 00:09:10.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.362 "is_configured": true, 00:09:10.362 "data_offset": 2048, 00:09:10.362 "data_size": 63488 00:09:10.362 } 00:09:10.362 ] 00:09:10.362 }' 00:09:10.362 16:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.362 16:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.621 [2024-09-28 16:10:25.232255] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.621 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.621 "name": "raid_bdev1", 00:09:10.621 "aliases": [ 00:09:10.621 "22e959d3-d424-4757-8a83-33c77510a22a" 00:09:10.621 ], 00:09:10.621 "product_name": "Raid Volume", 00:09:10.621 "block_size": 512, 00:09:10.621 "num_blocks": 190464, 00:09:10.621 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:10.621 "assigned_rate_limits": { 00:09:10.621 "rw_ios_per_sec": 0, 00:09:10.621 "rw_mbytes_per_sec": 0, 00:09:10.621 "r_mbytes_per_sec": 0, 00:09:10.621 "w_mbytes_per_sec": 0 00:09:10.621 }, 00:09:10.621 "claimed": false, 00:09:10.621 "zoned": false, 00:09:10.621 "supported_io_types": { 00:09:10.621 "read": true, 00:09:10.621 "write": true, 00:09:10.621 "unmap": true, 00:09:10.621 "flush": true, 00:09:10.621 "reset": true, 00:09:10.621 "nvme_admin": false, 00:09:10.621 "nvme_io": false, 00:09:10.621 "nvme_io_md": false, 00:09:10.621 "write_zeroes": true, 00:09:10.621 "zcopy": false, 00:09:10.621 "get_zone_info": false, 00:09:10.621 "zone_management": false, 00:09:10.621 "zone_append": false, 00:09:10.621 "compare": 
false, 00:09:10.621 "compare_and_write": false, 00:09:10.621 "abort": false, 00:09:10.621 "seek_hole": false, 00:09:10.621 "seek_data": false, 00:09:10.621 "copy": false, 00:09:10.621 "nvme_iov_md": false 00:09:10.621 }, 00:09:10.621 "memory_domains": [ 00:09:10.621 { 00:09:10.621 "dma_device_id": "system", 00:09:10.621 "dma_device_type": 1 00:09:10.621 }, 00:09:10.621 { 00:09:10.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.621 "dma_device_type": 2 00:09:10.621 }, 00:09:10.621 { 00:09:10.621 "dma_device_id": "system", 00:09:10.621 "dma_device_type": 1 00:09:10.621 }, 00:09:10.621 { 00:09:10.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.621 "dma_device_type": 2 00:09:10.621 }, 00:09:10.621 { 00:09:10.621 "dma_device_id": "system", 00:09:10.621 "dma_device_type": 1 00:09:10.622 }, 00:09:10.622 { 00:09:10.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.622 "dma_device_type": 2 00:09:10.622 } 00:09:10.622 ], 00:09:10.622 "driver_specific": { 00:09:10.622 "raid": { 00:09:10.622 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:10.622 "strip_size_kb": 64, 00:09:10.622 "state": "online", 00:09:10.622 "raid_level": "raid0", 00:09:10.622 "superblock": true, 00:09:10.622 "num_base_bdevs": 3, 00:09:10.622 "num_base_bdevs_discovered": 3, 00:09:10.622 "num_base_bdevs_operational": 3, 00:09:10.622 "base_bdevs_list": [ 00:09:10.622 { 00:09:10.622 "name": "pt1", 00:09:10.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.622 "is_configured": true, 00:09:10.622 "data_offset": 2048, 00:09:10.622 "data_size": 63488 00:09:10.622 }, 00:09:10.622 { 00:09:10.622 "name": "pt2", 00:09:10.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.622 "is_configured": true, 00:09:10.622 "data_offset": 2048, 00:09:10.622 "data_size": 63488 00:09:10.622 }, 00:09:10.622 { 00:09:10.622 "name": "pt3", 00:09:10.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.622 "is_configured": true, 00:09:10.622 "data_offset": 2048, 00:09:10.622 "data_size": 
63488 00:09:10.622 } 00:09:10.622 ] 00:09:10.622 } 00:09:10.622 } 00:09:10.622 }' 00:09:10.622 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:10.882 pt2 00:09:10.882 pt3' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.882 [2024-09-28 16:10:25.535658] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.882 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=22e959d3-d424-4757-8a83-33c77510a22a 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 22e959d3-d424-4757-8a83-33c77510a22a ']' 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 [2024-09-28 16:10:25.579338] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.143 [2024-09-28 16:10:25.579364] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.143 [2024-09-28 16:10:25.579423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.143 [2024-09-28 16:10:25.579480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.143 [2024-09-28 16:10:25.579489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:11.143 16:10:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 [2024-09-28 16:10:25.731200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:11.143 [2024-09-28 16:10:25.733387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:11.143 [2024-09-28 16:10:25.733490] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:11.143 [2024-09-28 16:10:25.733557] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:11.143 [2024-09-28 16:10:25.733656] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:11.143 [2024-09-28 16:10:25.733707] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:11.143 [2024-09-28 16:10:25.733796] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.143 [2024-09-28 16:10:25.733807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:11.143 request: 00:09:11.143 { 00:09:11.143 "name": "raid_bdev1", 00:09:11.143 "raid_level": "raid0", 00:09:11.143 "base_bdevs": [ 00:09:11.143 "malloc1", 00:09:11.143 "malloc2", 00:09:11.143 "malloc3" 00:09:11.143 ], 00:09:11.143 "strip_size_kb": 64, 00:09:11.143 "superblock": false, 00:09:11.143 "method": "bdev_raid_create", 00:09:11.143 "req_id": 1 00:09:11.143 } 00:09:11.143 Got JSON-RPC error response 00:09:11.143 response: 00:09:11.143 { 00:09:11.143 "code": -17, 00:09:11.143 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:11.143 } 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 [2024-09-28 16:10:25.795053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.143 [2024-09-28 16:10:25.795135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.143 [2024-09-28 16:10:25.795186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:11.143 [2024-09-28 16:10:25.795213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.143 [2024-09-28 16:10:25.797579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.143 [2024-09-28 16:10:25.797660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.143 [2024-09-28 16:10:25.797745] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:11.143 [2024-09-28 16:10:25.797818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:11.143 pt1 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:11.143 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.144 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.404 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.404 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.404 "name": "raid_bdev1", 00:09:11.404 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:11.404 
"strip_size_kb": 64, 00:09:11.404 "state": "configuring", 00:09:11.404 "raid_level": "raid0", 00:09:11.404 "superblock": true, 00:09:11.404 "num_base_bdevs": 3, 00:09:11.404 "num_base_bdevs_discovered": 1, 00:09:11.404 "num_base_bdevs_operational": 3, 00:09:11.404 "base_bdevs_list": [ 00:09:11.404 { 00:09:11.404 "name": "pt1", 00:09:11.404 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.404 "is_configured": true, 00:09:11.404 "data_offset": 2048, 00:09:11.404 "data_size": 63488 00:09:11.404 }, 00:09:11.404 { 00:09:11.404 "name": null, 00:09:11.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.404 "is_configured": false, 00:09:11.404 "data_offset": 2048, 00:09:11.404 "data_size": 63488 00:09:11.404 }, 00:09:11.404 { 00:09:11.404 "name": null, 00:09:11.404 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.404 "is_configured": false, 00:09:11.404 "data_offset": 2048, 00:09:11.404 "data_size": 63488 00:09:11.404 } 00:09:11.404 ] 00:09:11.404 }' 00:09:11.404 16:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.404 16:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.664 [2024-09-28 16:10:26.234338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.664 [2024-09-28 16:10:26.234392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.664 [2024-09-28 16:10:26.234429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:11.664 [2024-09-28 16:10:26.234438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.664 [2024-09-28 16:10:26.234814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.664 [2024-09-28 16:10:26.234849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.664 [2024-09-28 16:10:26.234917] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:11.664 [2024-09-28 16:10:26.234936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.664 pt2 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.664 [2024-09-28 16:10:26.246364] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.664 16:10:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.664 "name": "raid_bdev1", 00:09:11.664 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:11.664 "strip_size_kb": 64, 00:09:11.664 "state": "configuring", 00:09:11.664 "raid_level": "raid0", 00:09:11.664 "superblock": true, 00:09:11.664 "num_base_bdevs": 3, 00:09:11.664 "num_base_bdevs_discovered": 1, 00:09:11.664 "num_base_bdevs_operational": 3, 00:09:11.664 "base_bdevs_list": [ 00:09:11.664 { 00:09:11.664 "name": "pt1", 00:09:11.664 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.664 "is_configured": true, 00:09:11.664 "data_offset": 2048, 00:09:11.664 "data_size": 63488 00:09:11.664 }, 00:09:11.664 { 00:09:11.664 "name": null, 00:09:11.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.664 "is_configured": false, 00:09:11.664 "data_offset": 0, 00:09:11.664 "data_size": 63488 00:09:11.664 }, 00:09:11.664 { 00:09:11.664 "name": null, 00:09:11.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.664 
"is_configured": false, 00:09:11.664 "data_offset": 2048, 00:09:11.664 "data_size": 63488 00:09:11.664 } 00:09:11.664 ] 00:09:11.664 }' 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.664 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.234 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:12.234 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.234 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.235 [2024-09-28 16:10:26.689546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.235 [2024-09-28 16:10:26.689666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.235 [2024-09-28 16:10:26.689700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:12.235 [2024-09-28 16:10:26.689730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.235 [2024-09-28 16:10:26.690171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.235 [2024-09-28 16:10:26.690238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.235 [2024-09-28 16:10:26.690340] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.235 [2024-09-28 16:10:26.690408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.235 pt2 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.235 [2024-09-28 16:10:26.701543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.235 [2024-09-28 16:10:26.701640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.235 [2024-09-28 16:10:26.701669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:12.235 [2024-09-28 16:10:26.701696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.235 [2024-09-28 16:10:26.702065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.235 [2024-09-28 16:10:26.702124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.235 [2024-09-28 16:10:26.702204] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:12.235 [2024-09-28 16:10:26.702265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.235 [2024-09-28 16:10:26.702420] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:12.235 [2024-09-28 16:10:26.702460] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.235 [2024-09-28 16:10:26.702765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:12.235 [2024-09-28 16:10:26.702955] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:12.235 [2024-09-28 16:10:26.702996] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:12.235 [2024-09-28 16:10:26.703173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.235 pt3 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.235 "name": "raid_bdev1", 00:09:12.235 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:12.235 "strip_size_kb": 64, 00:09:12.235 "state": "online", 00:09:12.235 "raid_level": "raid0", 00:09:12.235 "superblock": true, 00:09:12.235 "num_base_bdevs": 3, 00:09:12.235 "num_base_bdevs_discovered": 3, 00:09:12.235 "num_base_bdevs_operational": 3, 00:09:12.235 "base_bdevs_list": [ 00:09:12.235 { 00:09:12.235 "name": "pt1", 00:09:12.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.235 "is_configured": true, 00:09:12.235 "data_offset": 2048, 00:09:12.235 "data_size": 63488 00:09:12.235 }, 00:09:12.235 { 00:09:12.235 "name": "pt2", 00:09:12.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.235 "is_configured": true, 00:09:12.235 "data_offset": 2048, 00:09:12.235 "data_size": 63488 00:09:12.235 }, 00:09:12.235 { 00:09:12.235 "name": "pt3", 00:09:12.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.235 "is_configured": true, 00:09:12.235 "data_offset": 2048, 00:09:12.235 "data_size": 63488 00:09:12.235 } 00:09:12.235 ] 00:09:12.235 }' 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.235 16:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:12.494 16:10:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.494 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.494 [2024-09-28 16:10:27.161019] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.753 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.753 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.753 "name": "raid_bdev1", 00:09:12.753 "aliases": [ 00:09:12.754 "22e959d3-d424-4757-8a83-33c77510a22a" 00:09:12.754 ], 00:09:12.754 "product_name": "Raid Volume", 00:09:12.754 "block_size": 512, 00:09:12.754 "num_blocks": 190464, 00:09:12.754 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:12.754 "assigned_rate_limits": { 00:09:12.754 "rw_ios_per_sec": 0, 00:09:12.754 "rw_mbytes_per_sec": 0, 00:09:12.754 "r_mbytes_per_sec": 0, 00:09:12.754 "w_mbytes_per_sec": 0 00:09:12.754 }, 00:09:12.754 "claimed": false, 00:09:12.754 "zoned": false, 00:09:12.754 "supported_io_types": { 00:09:12.754 "read": true, 00:09:12.754 "write": true, 00:09:12.754 "unmap": true, 00:09:12.754 "flush": true, 00:09:12.754 "reset": true, 00:09:12.754 "nvme_admin": false, 00:09:12.754 "nvme_io": false, 00:09:12.754 "nvme_io_md": false, 00:09:12.754 
"write_zeroes": true, 00:09:12.754 "zcopy": false, 00:09:12.754 "get_zone_info": false, 00:09:12.754 "zone_management": false, 00:09:12.754 "zone_append": false, 00:09:12.754 "compare": false, 00:09:12.754 "compare_and_write": false, 00:09:12.754 "abort": false, 00:09:12.754 "seek_hole": false, 00:09:12.754 "seek_data": false, 00:09:12.754 "copy": false, 00:09:12.754 "nvme_iov_md": false 00:09:12.754 }, 00:09:12.754 "memory_domains": [ 00:09:12.754 { 00:09:12.754 "dma_device_id": "system", 00:09:12.754 "dma_device_type": 1 00:09:12.754 }, 00:09:12.754 { 00:09:12.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.754 "dma_device_type": 2 00:09:12.754 }, 00:09:12.754 { 00:09:12.754 "dma_device_id": "system", 00:09:12.754 "dma_device_type": 1 00:09:12.754 }, 00:09:12.754 { 00:09:12.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.754 "dma_device_type": 2 00:09:12.754 }, 00:09:12.754 { 00:09:12.754 "dma_device_id": "system", 00:09:12.754 "dma_device_type": 1 00:09:12.754 }, 00:09:12.754 { 00:09:12.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.754 "dma_device_type": 2 00:09:12.754 } 00:09:12.754 ], 00:09:12.754 "driver_specific": { 00:09:12.754 "raid": { 00:09:12.754 "uuid": "22e959d3-d424-4757-8a83-33c77510a22a", 00:09:12.754 "strip_size_kb": 64, 00:09:12.754 "state": "online", 00:09:12.754 "raid_level": "raid0", 00:09:12.754 "superblock": true, 00:09:12.754 "num_base_bdevs": 3, 00:09:12.754 "num_base_bdevs_discovered": 3, 00:09:12.754 "num_base_bdevs_operational": 3, 00:09:12.754 "base_bdevs_list": [ 00:09:12.754 { 00:09:12.754 "name": "pt1", 00:09:12.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.754 "is_configured": true, 00:09:12.754 "data_offset": 2048, 00:09:12.754 "data_size": 63488 00:09:12.754 }, 00:09:12.754 { 00:09:12.754 "name": "pt2", 00:09:12.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.754 "is_configured": true, 00:09:12.754 "data_offset": 2048, 00:09:12.754 "data_size": 63488 00:09:12.754 }, 00:09:12.754 
{ 00:09:12.754 "name": "pt3", 00:09:12.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.754 "is_configured": true, 00:09:12.754 "data_offset": 2048, 00:09:12.754 "data_size": 63488 00:09:12.754 } 00:09:12.754 ] 00:09:12.754 } 00:09:12.754 } 00:09:12.754 }' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:12.754 pt2 00:09:12.754 pt3' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:12.754 16:10:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.754 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.014 
[2024-09-28 16:10:27.464486] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 22e959d3-d424-4757-8a83-33c77510a22a '!=' 22e959d3-d424-4757-8a83-33c77510a22a ']' 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65073 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65073 ']' 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65073 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65073 00:09:13.014 killing process with pid 65073 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65073' 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65073 00:09:13.014 [2024-09-28 16:10:27.541294] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.014 [2024-09-28 16:10:27.541383] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.014 [2024-09-28 16:10:27.541440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.014 [2024-09-28 16:10:27.541454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:13.014 16:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 65073 00:09:13.273 [2024-09-28 16:10:27.859677] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.656 16:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:14.656 00:09:14.656 real 0m5.550s 00:09:14.656 user 0m7.743s 00:09:14.656 sys 0m1.047s 00:09:14.656 16:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.656 ************************************ 00:09:14.656 END TEST raid_superblock_test 00:09:14.656 ************************************ 00:09:14.656 16:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.656 16:10:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:14.656 16:10:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:14.656 16:10:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.656 16:10:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.656 ************************************ 00:09:14.656 START TEST raid_read_error_test 00:09:14.656 ************************************ 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:14.656 16:10:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Pxzq6M2yYm 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65326 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65326 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65326 ']' 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.656 16:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.917 [2024-09-28 16:10:29.361494] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:14.917 [2024-09-28 16:10:29.361711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65326 ] 00:09:14.917 [2024-09-28 16:10:29.529251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.176 [2024-09-28 16:10:29.777957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.442 [2024-09-28 16:10:30.005043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.442 [2024-09-28 16:10:30.005086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 BaseBdev1_malloc 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 true 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 [2024-09-28 16:10:30.225984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:15.710 [2024-09-28 16:10:30.226043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.710 [2024-09-28 16:10:30.226060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:15.710 [2024-09-28 16:10:30.226071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.710 [2024-09-28 16:10:30.228475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.710 [2024-09-28 16:10:30.228515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:15.710 BaseBdev1 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 BaseBdev2_malloc 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 true 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 [2024-09-28 16:10:30.322348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:15.710 [2024-09-28 16:10:30.322403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.710 [2024-09-28 16:10:30.322418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:15.710 [2024-09-28 16:10:30.322429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.710 [2024-09-28 16:10:30.324746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.710 [2024-09-28 16:10:30.324783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:15.710 BaseBdev2 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:15.710 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 BaseBdev3_malloc 00:09:15.711 16:10:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 true 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 [2024-09-28 16:10:30.392627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:15.711 [2024-09-28 16:10:30.392679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.711 [2024-09-28 16:10:30.392696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:15.711 [2024-09-28 16:10:30.392707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.970 [2024-09-28 16:10:30.395088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.970 [2024-09-28 16:10:30.395127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:15.970 BaseBdev3 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.970 [2024-09-28 16:10:30.406085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.970 [2024-09-28 16:10:30.408154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.970 [2024-09-28 16:10:30.408290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.970 [2024-09-28 16:10:30.408514] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.970 [2024-09-28 16:10:30.408526] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.970 [2024-09-28 16:10:30.408773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:15.970 [2024-09-28 16:10:30.408927] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.970 [2024-09-28 16:10:30.408939] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:15.970 [2024-09-28 16:10:30.409081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.970 16:10:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.970 "name": "raid_bdev1", 00:09:15.970 "uuid": "03355f87-27e3-4df6-b783-e877bdcc944b", 00:09:15.970 "strip_size_kb": 64, 00:09:15.970 "state": "online", 00:09:15.970 "raid_level": "raid0", 00:09:15.970 "superblock": true, 00:09:15.970 "num_base_bdevs": 3, 00:09:15.970 "num_base_bdevs_discovered": 3, 00:09:15.970 "num_base_bdevs_operational": 3, 00:09:15.970 "base_bdevs_list": [ 00:09:15.970 { 00:09:15.970 "name": "BaseBdev1", 00:09:15.970 "uuid": "3789a384-68a8-5db3-ae5e-556b1b9b0732", 00:09:15.970 "is_configured": true, 00:09:15.970 "data_offset": 2048, 00:09:15.970 "data_size": 63488 00:09:15.970 }, 00:09:15.970 { 00:09:15.970 "name": "BaseBdev2", 00:09:15.970 "uuid": "f26f0fd7-d72f-55a4-a418-051b0a2a3530", 00:09:15.970 "is_configured": true, 00:09:15.970 "data_offset": 2048, 00:09:15.970 "data_size": 63488 
00:09:15.970 }, 00:09:15.970 { 00:09:15.970 "name": "BaseBdev3", 00:09:15.970 "uuid": "2ade68c2-4b32-5e0d-9933-9a8b73e508ad", 00:09:15.970 "is_configured": true, 00:09:15.970 "data_offset": 2048, 00:09:15.970 "data_size": 63488 00:09:15.970 } 00:09:15.970 ] 00:09:15.970 }' 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.970 16:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.229 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:16.230 16:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:16.230 [2024-09-28 16:10:30.906555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.170 16:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.429 16:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.429 16:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.429 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.429 "name": "raid_bdev1", 00:09:17.429 "uuid": "03355f87-27e3-4df6-b783-e877bdcc944b", 00:09:17.429 "strip_size_kb": 64, 00:09:17.429 "state": "online", 00:09:17.429 "raid_level": "raid0", 00:09:17.429 "superblock": true, 00:09:17.429 "num_base_bdevs": 3, 00:09:17.429 "num_base_bdevs_discovered": 3, 00:09:17.429 "num_base_bdevs_operational": 3, 00:09:17.429 "base_bdevs_list": [ 00:09:17.429 { 00:09:17.429 "name": "BaseBdev1", 00:09:17.429 "uuid": "3789a384-68a8-5db3-ae5e-556b1b9b0732", 00:09:17.429 "is_configured": true, 00:09:17.429 "data_offset": 2048, 00:09:17.429 "data_size": 63488 
00:09:17.429 }, 00:09:17.429 { 00:09:17.429 "name": "BaseBdev2", 00:09:17.429 "uuid": "f26f0fd7-d72f-55a4-a418-051b0a2a3530", 00:09:17.429 "is_configured": true, 00:09:17.429 "data_offset": 2048, 00:09:17.429 "data_size": 63488 00:09:17.429 }, 00:09:17.429 { 00:09:17.429 "name": "BaseBdev3", 00:09:17.429 "uuid": "2ade68c2-4b32-5e0d-9933-9a8b73e508ad", 00:09:17.429 "is_configured": true, 00:09:17.429 "data_offset": 2048, 00:09:17.429 "data_size": 63488 00:09:17.429 } 00:09:17.429 ] 00:09:17.429 }' 00:09:17.429 16:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.429 16:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.688 [2024-09-28 16:10:32.246901] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.688 [2024-09-28 16:10:32.246943] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.688 [2024-09-28 16:10:32.249496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.688 [2024-09-28 16:10:32.249542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.688 [2024-09-28 16:10:32.249582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.688 [2024-09-28 16:10:32.249592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:17.688 { 00:09:17.688 "results": [ 00:09:17.688 { 00:09:17.688 "job": "raid_bdev1", 00:09:17.688 "core_mask": "0x1", 00:09:17.688 "workload": "randrw", 00:09:17.688 "percentage": 50, 
00:09:17.688 "status": "finished", 00:09:17.688 "queue_depth": 1, 00:09:17.688 "io_size": 131072, 00:09:17.688 "runtime": 1.340825, 00:09:17.688 "iops": 14672.309958421121, 00:09:17.688 "mibps": 1834.0387448026402, 00:09:17.688 "io_failed": 1, 00:09:17.688 "io_timeout": 0, 00:09:17.688 "avg_latency_us": 96.01700664055546, 00:09:17.688 "min_latency_us": 24.929257641921396, 00:09:17.688 "max_latency_us": 1345.0620087336245 00:09:17.688 } 00:09:17.688 ], 00:09:17.688 "core_count": 1 00:09:17.688 } 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65326 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65326 ']' 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65326 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65326 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65326' 00:09:17.688 killing process with pid 65326 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65326 00:09:17.688 [2024-09-28 16:10:32.295987] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.688 16:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65326 00:09:17.948 [2024-09-28 
16:10:32.534359] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Pxzq6M2yYm 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:19.327 ************************************ 00:09:19.327 END TEST raid_read_error_test 00:09:19.327 ************************************ 00:09:19.327 00:09:19.327 real 0m4.664s 00:09:19.327 user 0m5.268s 00:09:19.327 sys 0m0.693s 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.327 16:10:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.327 16:10:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:19.327 16:10:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:19.327 16:10:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.328 16:10:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.328 ************************************ 00:09:19.328 START TEST raid_write_error_test 00:09:19.328 ************************************ 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:09:19.328 16:10:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.328 16:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:19.328 16:10:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:19.328 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.T5vnOhgTx8 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65478 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65478 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65478 ']' 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.587 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.587 [2024-09-28 16:10:34.099359] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:19.587 [2024-09-28 16:10:34.099562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65478 ] 00:09:19.587 [2024-09-28 16:10:34.263346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.847 [2024-09-28 16:10:34.500062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.107 [2024-09-28 16:10:34.727050] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.107 [2024-09-28 16:10:34.727195] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.367 BaseBdev1_malloc 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.367 true 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.367 16:10:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.367 [2024-09-28 16:10:34.998928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:20.367 [2024-09-28 16:10:34.999047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.367 [2024-09-28 16:10:34.999096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:20.367 [2024-09-28 16:10:34.999133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.367 [2024-09-28 16:10:35.001624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.367 [2024-09-28 16:10:35.001697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:20.367 BaseBdev1 00:09:20.367 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.367 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.367 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:20.367 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.367 16:10:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.627 BaseBdev2_malloc 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.627 true 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.627 [2024-09-28 16:10:35.082659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:20.627 [2024-09-28 16:10:35.082721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.627 [2024-09-28 16:10:35.082738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:20.627 [2024-09-28 16:10:35.082749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.627 [2024-09-28 16:10:35.085111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.627 [2024-09-28 16:10:35.085215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:20.627 BaseBdev2 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.627 16:10:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.627 BaseBdev3_malloc 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.627 true 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.627 [2024-09-28 16:10:35.154549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:20.627 [2024-09-28 16:10:35.154642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.627 [2024-09-28 16:10:35.154679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:20.627 [2024-09-28 16:10:35.154690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.627 [2024-09-28 16:10:35.157121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.627 [2024-09-28 16:10:35.157209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:20.627 BaseBdev3 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.627 [2024-09-28 16:10:35.166622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.627 [2024-09-28 16:10:35.168706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.627 [2024-09-28 16:10:35.168786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.627 [2024-09-28 16:10:35.168980] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:20.627 [2024-09-28 16:10:35.168992] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.627 [2024-09-28 16:10:35.169263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:20.627 [2024-09-28 16:10:35.169417] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:20.627 [2024-09-28 16:10:35.169429] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:20.627 [2024-09-28 16:10:35.169583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:20.627 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.628 "name": "raid_bdev1", 00:09:20.628 "uuid": "361b409f-6047-45d8-a51e-024d68ba1e65", 00:09:20.628 "strip_size_kb": 64, 00:09:20.628 "state": "online", 00:09:20.628 "raid_level": "raid0", 00:09:20.628 "superblock": true, 00:09:20.628 "num_base_bdevs": 3, 00:09:20.628 "num_base_bdevs_discovered": 3, 00:09:20.628 "num_base_bdevs_operational": 3, 00:09:20.628 "base_bdevs_list": [ 00:09:20.628 { 00:09:20.628 "name": "BaseBdev1", 
00:09:20.628 "uuid": "645f55c7-b8f7-5457-bf01-8b4dbf6cf38c", 00:09:20.628 "is_configured": true, 00:09:20.628 "data_offset": 2048, 00:09:20.628 "data_size": 63488 00:09:20.628 }, 00:09:20.628 { 00:09:20.628 "name": "BaseBdev2", 00:09:20.628 "uuid": "e8c5792b-3cdb-56d6-b3c1-0b8179a85a22", 00:09:20.628 "is_configured": true, 00:09:20.628 "data_offset": 2048, 00:09:20.628 "data_size": 63488 00:09:20.628 }, 00:09:20.628 { 00:09:20.628 "name": "BaseBdev3", 00:09:20.628 "uuid": "5a54740a-5748-5a6f-a666-9aa6cb8fbaff", 00:09:20.628 "is_configured": true, 00:09:20.628 "data_offset": 2048, 00:09:20.628 "data_size": 63488 00:09:20.628 } 00:09:20.628 ] 00:09:20.628 }' 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.628 16:10:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.196 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:21.196 16:10:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:21.196 [2024-09-28 16:10:35.674833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.139 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.139 "name": "raid_bdev1", 00:09:22.139 "uuid": "361b409f-6047-45d8-a51e-024d68ba1e65", 00:09:22.139 "strip_size_kb": 64, 00:09:22.139 "state": "online", 00:09:22.139 
"raid_level": "raid0", 00:09:22.139 "superblock": true, 00:09:22.139 "num_base_bdevs": 3, 00:09:22.139 "num_base_bdevs_discovered": 3, 00:09:22.140 "num_base_bdevs_operational": 3, 00:09:22.140 "base_bdevs_list": [ 00:09:22.140 { 00:09:22.140 "name": "BaseBdev1", 00:09:22.140 "uuid": "645f55c7-b8f7-5457-bf01-8b4dbf6cf38c", 00:09:22.140 "is_configured": true, 00:09:22.140 "data_offset": 2048, 00:09:22.140 "data_size": 63488 00:09:22.140 }, 00:09:22.140 { 00:09:22.140 "name": "BaseBdev2", 00:09:22.140 "uuid": "e8c5792b-3cdb-56d6-b3c1-0b8179a85a22", 00:09:22.140 "is_configured": true, 00:09:22.140 "data_offset": 2048, 00:09:22.140 "data_size": 63488 00:09:22.140 }, 00:09:22.140 { 00:09:22.140 "name": "BaseBdev3", 00:09:22.140 "uuid": "5a54740a-5748-5a6f-a666-9aa6cb8fbaff", 00:09:22.140 "is_configured": true, 00:09:22.140 "data_offset": 2048, 00:09:22.140 "data_size": 63488 00:09:22.140 } 00:09:22.140 ] 00:09:22.140 }' 00:09:22.140 16:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.140 16:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.708 [2024-09-28 16:10:37.095698] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.708 [2024-09-28 16:10:37.095808] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.708 [2024-09-28 16:10:37.098428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.708 [2024-09-28 16:10:37.098475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.708 [2024-09-28 16:10:37.098515] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.708 [2024-09-28 16:10:37.098526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:22.708 { 00:09:22.708 "results": [ 00:09:22.708 { 00:09:22.708 "job": "raid_bdev1", 00:09:22.708 "core_mask": "0x1", 00:09:22.708 "workload": "randrw", 00:09:22.708 "percentage": 50, 00:09:22.708 "status": "finished", 00:09:22.708 "queue_depth": 1, 00:09:22.708 "io_size": 131072, 00:09:22.708 "runtime": 1.421699, 00:09:22.708 "iops": 14678.212476761959, 00:09:22.708 "mibps": 1834.7765595952449, 00:09:22.708 "io_failed": 1, 00:09:22.708 "io_timeout": 0, 00:09:22.708 "avg_latency_us": 95.95059586721156, 00:09:22.708 "min_latency_us": 21.910917030567685, 00:09:22.708 "max_latency_us": 1387.989519650655 00:09:22.708 } 00:09:22.708 ], 00:09:22.708 "core_count": 1 00:09:22.708 } 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65478 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65478 ']' 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65478 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65478 00:09:22.708 killing process with pid 65478 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.708 16:10:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65478' 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65478 00:09:22.708 [2024-09-28 16:10:37.132030] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.708 16:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65478 00:09:22.708 [2024-09-28 16:10:37.370708] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.T5vnOhgTx8 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:24.087 ************************************ 00:09:24.087 END TEST raid_write_error_test 00:09:24.087 ************************************ 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:24.087 00:09:24.087 real 0m4.772s 00:09:24.087 user 0m5.488s 00:09:24.087 sys 0m0.673s 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.087 16:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.346 16:10:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:24.347 16:10:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:24.347 16:10:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:24.347 16:10:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.347 16:10:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.347 ************************************ 00:09:24.347 START TEST raid_state_function_test 00:09:24.347 ************************************ 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:24.347 16:10:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65616 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:24.347 Process raid pid: 65616 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65616' 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65616 00:09:24.347 16:10:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65616 ']' 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.347 16:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.347 [2024-09-28 16:10:38.932022] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:24.347 [2024-09-28 16:10:38.932201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.606 [2024-09-28 16:10:39.095952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.865 [2024-09-28 16:10:39.332907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.865 [2024-09-28 16:10:39.547765] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.865 [2024-09-28 16:10:39.547809] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.125 [2024-09-28 16:10:39.764686] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.125 [2024-09-28 16:10:39.764745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.125 [2024-09-28 16:10:39.764756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.125 [2024-09-28 16:10:39.764765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.125 [2024-09-28 16:10:39.764771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.125 [2024-09-28 16:10:39.764781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.125 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.384 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.384 "name": "Existed_Raid", 00:09:25.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.384 "strip_size_kb": 64, 00:09:25.384 "state": "configuring", 00:09:25.384 "raid_level": "concat", 00:09:25.384 "superblock": false, 00:09:25.384 "num_base_bdevs": 3, 00:09:25.384 "num_base_bdevs_discovered": 0, 00:09:25.384 "num_base_bdevs_operational": 3, 00:09:25.384 "base_bdevs_list": [ 00:09:25.384 { 00:09:25.384 "name": "BaseBdev1", 00:09:25.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.384 "is_configured": false, 00:09:25.384 "data_offset": 0, 00:09:25.384 "data_size": 0 00:09:25.384 }, 00:09:25.384 { 00:09:25.384 "name": "BaseBdev2", 00:09:25.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.384 "is_configured": false, 00:09:25.384 "data_offset": 0, 00:09:25.384 "data_size": 0 00:09:25.384 }, 00:09:25.384 { 00:09:25.384 "name": "BaseBdev3", 00:09:25.384 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:25.384 "is_configured": false, 00:09:25.384 "data_offset": 0, 00:09:25.384 "data_size": 0 00:09:25.384 } 00:09:25.384 ] 00:09:25.384 }' 00:09:25.384 16:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.384 16:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.642 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.643 [2024-09-28 16:10:40.203837] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.643 [2024-09-28 16:10:40.203925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.643 [2024-09-28 16:10:40.215845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.643 [2024-09-28 16:10:40.215941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.643 [2024-09-28 16:10:40.215970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.643 [2024-09-28 16:10:40.215993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:25.643 [2024-09-28 16:10:40.216011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.643 [2024-09-28 16:10:40.216032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.643 [2024-09-28 16:10:40.306065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.643 BaseBdev1 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.643 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.902 [ 00:09:25.902 { 00:09:25.902 "name": "BaseBdev1", 00:09:25.902 "aliases": [ 00:09:25.902 "7f076202-3864-4999-856c-1797b34ec1d9" 00:09:25.902 ], 00:09:25.902 "product_name": "Malloc disk", 00:09:25.902 "block_size": 512, 00:09:25.902 "num_blocks": 65536, 00:09:25.902 "uuid": "7f076202-3864-4999-856c-1797b34ec1d9", 00:09:25.902 "assigned_rate_limits": { 00:09:25.902 "rw_ios_per_sec": 0, 00:09:25.902 "rw_mbytes_per_sec": 0, 00:09:25.902 "r_mbytes_per_sec": 0, 00:09:25.902 "w_mbytes_per_sec": 0 00:09:25.902 }, 00:09:25.902 "claimed": true, 00:09:25.902 "claim_type": "exclusive_write", 00:09:25.902 "zoned": false, 00:09:25.902 "supported_io_types": { 00:09:25.902 "read": true, 00:09:25.902 "write": true, 00:09:25.902 "unmap": true, 00:09:25.902 "flush": true, 00:09:25.902 "reset": true, 00:09:25.902 "nvme_admin": false, 00:09:25.902 "nvme_io": false, 00:09:25.902 "nvme_io_md": false, 00:09:25.902 "write_zeroes": true, 00:09:25.902 "zcopy": true, 00:09:25.902 "get_zone_info": false, 00:09:25.902 "zone_management": false, 00:09:25.902 "zone_append": false, 00:09:25.902 "compare": false, 00:09:25.902 "compare_and_write": false, 00:09:25.902 "abort": true, 00:09:25.902 "seek_hole": false, 00:09:25.902 "seek_data": false, 00:09:25.902 "copy": true, 00:09:25.902 "nvme_iov_md": false 00:09:25.902 }, 00:09:25.902 "memory_domains": [ 00:09:25.902 { 00:09:25.902 "dma_device_id": "system", 00:09:25.902 "dma_device_type": 1 00:09:25.902 }, 00:09:25.902 { 00:09:25.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:25.902 "dma_device_type": 2 00:09:25.902 } 00:09:25.902 ], 00:09:25.902 "driver_specific": {} 00:09:25.902 } 00:09:25.902 ] 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.902 16:10:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.902 "name": "Existed_Raid", 00:09:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.902 "strip_size_kb": 64, 00:09:25.902 "state": "configuring", 00:09:25.902 "raid_level": "concat", 00:09:25.902 "superblock": false, 00:09:25.902 "num_base_bdevs": 3, 00:09:25.902 "num_base_bdevs_discovered": 1, 00:09:25.902 "num_base_bdevs_operational": 3, 00:09:25.902 "base_bdevs_list": [ 00:09:25.902 { 00:09:25.902 "name": "BaseBdev1", 00:09:25.902 "uuid": "7f076202-3864-4999-856c-1797b34ec1d9", 00:09:25.902 "is_configured": true, 00:09:25.902 "data_offset": 0, 00:09:25.902 "data_size": 65536 00:09:25.902 }, 00:09:25.902 { 00:09:25.902 "name": "BaseBdev2", 00:09:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.902 "is_configured": false, 00:09:25.902 "data_offset": 0, 00:09:25.902 "data_size": 0 00:09:25.902 }, 00:09:25.902 { 00:09:25.902 "name": "BaseBdev3", 00:09:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.902 "is_configured": false, 00:09:25.902 "data_offset": 0, 00:09:25.902 "data_size": 0 00:09:25.902 } 00:09:25.902 ] 00:09:25.902 }' 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.902 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.161 [2024-09-28 16:10:40.777301] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.161 [2024-09-28 16:10:40.777393] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.161 [2024-09-28 16:10:40.789342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.161 [2024-09-28 16:10:40.791553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.161 [2024-09-28 16:10:40.791594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.161 [2024-09-28 16:10:40.791605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.161 [2024-09-28 16:10:40.791614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.161 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.162 16:10:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.162 "name": "Existed_Raid", 00:09:26.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.162 "strip_size_kb": 64, 00:09:26.162 "state": "configuring", 00:09:26.162 "raid_level": "concat", 00:09:26.162 "superblock": false, 00:09:26.162 "num_base_bdevs": 3, 00:09:26.162 "num_base_bdevs_discovered": 1, 00:09:26.162 "num_base_bdevs_operational": 3, 00:09:26.162 "base_bdevs_list": [ 00:09:26.162 { 00:09:26.162 "name": "BaseBdev1", 00:09:26.162 "uuid": "7f076202-3864-4999-856c-1797b34ec1d9", 00:09:26.162 "is_configured": true, 00:09:26.162 "data_offset": 
0, 00:09:26.162 "data_size": 65536 00:09:26.162 }, 00:09:26.162 { 00:09:26.162 "name": "BaseBdev2", 00:09:26.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.162 "is_configured": false, 00:09:26.162 "data_offset": 0, 00:09:26.162 "data_size": 0 00:09:26.162 }, 00:09:26.162 { 00:09:26.162 "name": "BaseBdev3", 00:09:26.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.162 "is_configured": false, 00:09:26.162 "data_offset": 0, 00:09:26.162 "data_size": 0 00:09:26.162 } 00:09:26.162 ] 00:09:26.162 }' 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.162 16:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 [2024-09-28 16:10:41.264685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.732 BaseBdev2 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
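The `waitforbdev BaseBdev2` sequence entered above comes from `common/autotest_common.sh`: it calls `bdev_wait_for_examine` and then `bdev_get_bdevs -b <name> -t 2000`, where `-t` tells the target itself to wait up to 2000 ms for the bdev. The sketch below models that wait as a plain retry loop instead, with `rpc_cmd` stubbed out (a real run would invoke `scripts/rpc.py` against a live SPDK target, which is not assumed here); the stub and the loop bound are illustrative, not the actual helper.

```shell
# Hypothetical stand-in for SPDK's rpc_cmd: a real run would call
# scripts/rpc.py against a running target. This stub only "finds" the
# bdev on the third call, to exercise the retry path.
calls=0
rpc_cmd() {
    calls=$((calls + 1))
    [ "$calls" -ge 3 ]
}

# Simplified sketch of the waitforbdev pattern seen in the log: poll
# bdev_get_bdevs until the named bdev shows up or we give up.
waitforbdev() {
    bdev_name=$1
    i=0
    while [ "$i" -lt 10 ]; do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t 2000; then
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready after $calls attempts"
```

In the real helper the target-side `-t 2000` timeout does most of the waiting, so the shell-side loop rarely spins more than once.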
00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 [ 00:09:26.732 { 00:09:26.732 "name": "BaseBdev2", 00:09:26.732 "aliases": [ 00:09:26.732 "8f030ab3-ff61-4791-8961-ed2074717fbf" 00:09:26.732 ], 00:09:26.732 "product_name": "Malloc disk", 00:09:26.732 "block_size": 512, 00:09:26.732 "num_blocks": 65536, 00:09:26.732 "uuid": "8f030ab3-ff61-4791-8961-ed2074717fbf", 00:09:26.732 "assigned_rate_limits": { 00:09:26.732 "rw_ios_per_sec": 0, 00:09:26.732 "rw_mbytes_per_sec": 0, 00:09:26.732 "r_mbytes_per_sec": 0, 00:09:26.732 "w_mbytes_per_sec": 0 00:09:26.732 }, 00:09:26.732 "claimed": true, 00:09:26.732 "claim_type": "exclusive_write", 00:09:26.732 "zoned": false, 00:09:26.732 "supported_io_types": { 00:09:26.732 "read": true, 00:09:26.732 "write": true, 00:09:26.732 "unmap": true, 00:09:26.732 "flush": true, 00:09:26.732 "reset": true, 00:09:26.732 "nvme_admin": false, 00:09:26.732 "nvme_io": false, 00:09:26.732 "nvme_io_md": false, 00:09:26.732 "write_zeroes": true, 00:09:26.732 "zcopy": true, 00:09:26.732 "get_zone_info": false, 00:09:26.732 "zone_management": false, 00:09:26.732 "zone_append": false, 00:09:26.732 "compare": false, 00:09:26.732 "compare_and_write": false, 00:09:26.732 "abort": true, 00:09:26.732 "seek_hole": 
false, 00:09:26.732 "seek_data": false, 00:09:26.732 "copy": true, 00:09:26.732 "nvme_iov_md": false 00:09:26.732 }, 00:09:26.732 "memory_domains": [ 00:09:26.732 { 00:09:26.732 "dma_device_id": "system", 00:09:26.732 "dma_device_type": 1 00:09:26.732 }, 00:09:26.732 { 00:09:26.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.732 "dma_device_type": 2 00:09:26.732 } 00:09:26.732 ], 00:09:26.732 "driver_specific": {} 00:09:26.732 } 00:09:26.732 ] 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.732 "name": "Existed_Raid", 00:09:26.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.732 "strip_size_kb": 64, 00:09:26.732 "state": "configuring", 00:09:26.732 "raid_level": "concat", 00:09:26.732 "superblock": false, 00:09:26.732 "num_base_bdevs": 3, 00:09:26.732 "num_base_bdevs_discovered": 2, 00:09:26.732 "num_base_bdevs_operational": 3, 00:09:26.732 "base_bdevs_list": [ 00:09:26.732 { 00:09:26.732 "name": "BaseBdev1", 00:09:26.732 "uuid": "7f076202-3864-4999-856c-1797b34ec1d9", 00:09:26.732 "is_configured": true, 00:09:26.732 "data_offset": 0, 00:09:26.732 "data_size": 65536 00:09:26.732 }, 00:09:26.732 { 00:09:26.732 "name": "BaseBdev2", 00:09:26.732 "uuid": "8f030ab3-ff61-4791-8961-ed2074717fbf", 00:09:26.732 "is_configured": true, 00:09:26.732 "data_offset": 0, 00:09:26.732 "data_size": 65536 00:09:26.732 }, 00:09:26.732 { 00:09:26.732 "name": "BaseBdev3", 00:09:26.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.732 "is_configured": false, 00:09:26.732 "data_offset": 0, 00:09:26.732 "data_size": 0 00:09:26.732 } 00:09:26.732 ] 00:09:26.732 }' 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.732 16:10:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 [2024-09-28 16:10:41.801646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.301 [2024-09-28 16:10:41.801695] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.301 [2024-09-28 16:10:41.801710] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:27.301 [2024-09-28 16:10:41.801989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:27.301 [2024-09-28 16:10:41.802190] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.301 [2024-09-28 16:10:41.802201] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:27.301 [2024-09-28 16:10:41.802490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.301 BaseBdev3 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.301 16:10:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 [ 00:09:27.301 { 00:09:27.301 "name": "BaseBdev3", 00:09:27.301 "aliases": [ 00:09:27.301 "1a0ace43-abfc-4c91-813f-fd86ebb1f169" 00:09:27.301 ], 00:09:27.301 "product_name": "Malloc disk", 00:09:27.301 "block_size": 512, 00:09:27.301 "num_blocks": 65536, 00:09:27.301 "uuid": "1a0ace43-abfc-4c91-813f-fd86ebb1f169", 00:09:27.301 "assigned_rate_limits": { 00:09:27.301 "rw_ios_per_sec": 0, 00:09:27.301 "rw_mbytes_per_sec": 0, 00:09:27.301 "r_mbytes_per_sec": 0, 00:09:27.301 "w_mbytes_per_sec": 0 00:09:27.301 }, 00:09:27.301 "claimed": true, 00:09:27.301 "claim_type": "exclusive_write", 00:09:27.301 "zoned": false, 00:09:27.301 "supported_io_types": { 00:09:27.301 "read": true, 00:09:27.301 "write": true, 00:09:27.301 "unmap": true, 00:09:27.301 "flush": true, 00:09:27.301 "reset": true, 00:09:27.301 "nvme_admin": false, 00:09:27.301 "nvme_io": false, 00:09:27.301 "nvme_io_md": false, 00:09:27.301 "write_zeroes": true, 00:09:27.301 "zcopy": true, 00:09:27.301 "get_zone_info": false, 00:09:27.301 "zone_management": false, 00:09:27.301 "zone_append": false, 00:09:27.301 "compare": false, 
00:09:27.301 "compare_and_write": false, 00:09:27.301 "abort": true, 00:09:27.301 "seek_hole": false, 00:09:27.301 "seek_data": false, 00:09:27.301 "copy": true, 00:09:27.301 "nvme_iov_md": false 00:09:27.301 }, 00:09:27.301 "memory_domains": [ 00:09:27.301 { 00:09:27.301 "dma_device_id": "system", 00:09:27.301 "dma_device_type": 1 00:09:27.301 }, 00:09:27.301 { 00:09:27.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.301 "dma_device_type": 2 00:09:27.301 } 00:09:27.301 ], 00:09:27.301 "driver_specific": {} 00:09:27.301 } 00:09:27.301 ] 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.301 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.301 "name": "Existed_Raid", 00:09:27.301 "uuid": "8308541b-9ff3-47d3-8fca-4eb3fc6dfad7", 00:09:27.301 "strip_size_kb": 64, 00:09:27.301 "state": "online", 00:09:27.301 "raid_level": "concat", 00:09:27.301 "superblock": false, 00:09:27.301 "num_base_bdevs": 3, 00:09:27.301 "num_base_bdevs_discovered": 3, 00:09:27.302 "num_base_bdevs_operational": 3, 00:09:27.302 "base_bdevs_list": [ 00:09:27.302 { 00:09:27.302 "name": "BaseBdev1", 00:09:27.302 "uuid": "7f076202-3864-4999-856c-1797b34ec1d9", 00:09:27.302 "is_configured": true, 00:09:27.302 "data_offset": 0, 00:09:27.302 "data_size": 65536 00:09:27.302 }, 00:09:27.302 { 00:09:27.302 "name": "BaseBdev2", 00:09:27.302 "uuid": "8f030ab3-ff61-4791-8961-ed2074717fbf", 00:09:27.302 "is_configured": true, 00:09:27.302 "data_offset": 0, 00:09:27.302 "data_size": 65536 00:09:27.302 }, 00:09:27.302 { 00:09:27.302 "name": "BaseBdev3", 00:09:27.302 "uuid": "1a0ace43-abfc-4c91-813f-fd86ebb1f169", 00:09:27.302 "is_configured": true, 00:09:27.302 "data_offset": 0, 00:09:27.302 "data_size": 65536 00:09:27.302 } 00:09:27.302 ] 00:09:27.302 }' 00:09:27.302 16:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
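At this point the log shows `verify_raid_bdev_state` capturing `raid_bdev_info` by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checking that the state has reached "online" with all three base bdevs discovered. The sketch below reproduces that check against a JSON blob trimmed from the record above; `sed` stands in for `jq` only so the sketch has no external dependencies, and the `json_field` helper is an illustrative assumption, not part of the test suite.

```shell
# JSON shaped like the bdev_raid_get_bdevs record captured in the log,
# trimmed to the fields verify_raid_bdev_state actually checks.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

# The real test uses jq -r '.[] | select(.name == "Existed_Raid")' on
# live RPC output; this sed-based extractor is a dependency-free
# stand-in that pulls one scalar field per call.
json_field() {
    printf '%s\n' "$raid_bdev_info" |
        sed -n "s/.*\"$1\": *\"\{0,1\}\([a-z0-9_]*\)\"\{0,1\}.*/\1/p" |
        head -n1
}

state=$(json_field state)
discovered=$(json_field num_base_bdevs_discovered)

[ "$state" = online ] && [ "$discovered" -eq 3 ] &&
    echo "Existed_Raid is online with all 3 base bdevs"
```

This mirrors the progression visible in the log: `num_base_bdevs_discovered` climbs 0, 1, 2, 3 as each Malloc base bdev is created and claimed, and `state` flips from "configuring" to "online" only once the last one is configured.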
00:09:27.302 16:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.561 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.561 [2024-09-28 16:10:42.245244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.822 "name": "Existed_Raid", 00:09:27.822 "aliases": [ 00:09:27.822 "8308541b-9ff3-47d3-8fca-4eb3fc6dfad7" 00:09:27.822 ], 00:09:27.822 "product_name": "Raid Volume", 00:09:27.822 "block_size": 512, 00:09:27.822 "num_blocks": 196608, 00:09:27.822 "uuid": "8308541b-9ff3-47d3-8fca-4eb3fc6dfad7", 00:09:27.822 "assigned_rate_limits": { 00:09:27.822 "rw_ios_per_sec": 0, 00:09:27.822 "rw_mbytes_per_sec": 0, 00:09:27.822 "r_mbytes_per_sec": 
0, 00:09:27.822 "w_mbytes_per_sec": 0 00:09:27.822 }, 00:09:27.822 "claimed": false, 00:09:27.822 "zoned": false, 00:09:27.822 "supported_io_types": { 00:09:27.822 "read": true, 00:09:27.822 "write": true, 00:09:27.822 "unmap": true, 00:09:27.822 "flush": true, 00:09:27.822 "reset": true, 00:09:27.822 "nvme_admin": false, 00:09:27.822 "nvme_io": false, 00:09:27.822 "nvme_io_md": false, 00:09:27.822 "write_zeroes": true, 00:09:27.822 "zcopy": false, 00:09:27.822 "get_zone_info": false, 00:09:27.822 "zone_management": false, 00:09:27.822 "zone_append": false, 00:09:27.822 "compare": false, 00:09:27.822 "compare_and_write": false, 00:09:27.822 "abort": false, 00:09:27.822 "seek_hole": false, 00:09:27.822 "seek_data": false, 00:09:27.822 "copy": false, 00:09:27.822 "nvme_iov_md": false 00:09:27.822 }, 00:09:27.822 "memory_domains": [ 00:09:27.822 { 00:09:27.822 "dma_device_id": "system", 00:09:27.822 "dma_device_type": 1 00:09:27.822 }, 00:09:27.822 { 00:09:27.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.822 "dma_device_type": 2 00:09:27.822 }, 00:09:27.822 { 00:09:27.822 "dma_device_id": "system", 00:09:27.822 "dma_device_type": 1 00:09:27.822 }, 00:09:27.822 { 00:09:27.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.822 "dma_device_type": 2 00:09:27.822 }, 00:09:27.822 { 00:09:27.822 "dma_device_id": "system", 00:09:27.822 "dma_device_type": 1 00:09:27.822 }, 00:09:27.822 { 00:09:27.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.822 "dma_device_type": 2 00:09:27.822 } 00:09:27.822 ], 00:09:27.822 "driver_specific": { 00:09:27.822 "raid": { 00:09:27.822 "uuid": "8308541b-9ff3-47d3-8fca-4eb3fc6dfad7", 00:09:27.822 "strip_size_kb": 64, 00:09:27.822 "state": "online", 00:09:27.822 "raid_level": "concat", 00:09:27.822 "superblock": false, 00:09:27.822 "num_base_bdevs": 3, 00:09:27.822 "num_base_bdevs_discovered": 3, 00:09:27.822 "num_base_bdevs_operational": 3, 00:09:27.822 "base_bdevs_list": [ 00:09:27.822 { 00:09:27.822 "name": "BaseBdev1", 
00:09:27.822 "uuid": "7f076202-3864-4999-856c-1797b34ec1d9", 00:09:27.822 "is_configured": true, 00:09:27.822 "data_offset": 0, 00:09:27.822 "data_size": 65536 00:09:27.822 }, 00:09:27.822 { 00:09:27.822 "name": "BaseBdev2", 00:09:27.822 "uuid": "8f030ab3-ff61-4791-8961-ed2074717fbf", 00:09:27.822 "is_configured": true, 00:09:27.822 "data_offset": 0, 00:09:27.822 "data_size": 65536 00:09:27.822 }, 00:09:27.822 { 00:09:27.822 "name": "BaseBdev3", 00:09:27.822 "uuid": "1a0ace43-abfc-4c91-813f-fd86ebb1f169", 00:09:27.822 "is_configured": true, 00:09:27.822 "data_offset": 0, 00:09:27.822 "data_size": 65536 00:09:27.822 } 00:09:27.822 ] 00:09:27.822 } 00:09:27.822 } 00:09:27.822 }' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:27.822 BaseBdev2 00:09:27.822 BaseBdev3' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.822 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.822 [2024-09-28 16:10:42.480541] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.822 [2024-09-28 16:10:42.480608] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.822 [2024-09-28 16:10:42.480704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.082 "name": "Existed_Raid", 00:09:28.082 "uuid": "8308541b-9ff3-47d3-8fca-4eb3fc6dfad7", 00:09:28.082 "strip_size_kb": 64, 00:09:28.082 "state": "offline", 00:09:28.082 "raid_level": "concat", 00:09:28.082 "superblock": false, 00:09:28.082 "num_base_bdevs": 3, 00:09:28.082 "num_base_bdevs_discovered": 2, 00:09:28.082 "num_base_bdevs_operational": 2, 00:09:28.082 "base_bdevs_list": [ 00:09:28.082 { 00:09:28.082 "name": null, 00:09:28.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.082 "is_configured": false, 00:09:28.082 "data_offset": 0, 00:09:28.082 "data_size": 65536 00:09:28.082 }, 00:09:28.082 { 00:09:28.082 "name": "BaseBdev2", 00:09:28.082 "uuid": 
"8f030ab3-ff61-4791-8961-ed2074717fbf", 00:09:28.082 "is_configured": true, 00:09:28.082 "data_offset": 0, 00:09:28.082 "data_size": 65536 00:09:28.082 }, 00:09:28.082 { 00:09:28.082 "name": "BaseBdev3", 00:09:28.082 "uuid": "1a0ace43-abfc-4c91-813f-fd86ebb1f169", 00:09:28.082 "is_configured": true, 00:09:28.082 "data_offset": 0, 00:09:28.082 "data_size": 65536 00:09:28.082 } 00:09:28.082 ] 00:09:28.082 }' 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.082 16:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.652 [2024-09-28 16:10:43.083611] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.652 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.652 [2024-09-28 16:10:43.246422] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.652 [2024-09-28 16:10:43.246486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.913 16:10:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.913 BaseBdev2 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.913 
16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.913 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.913 [ 00:09:28.913 { 00:09:28.913 "name": "BaseBdev2", 00:09:28.913 "aliases": [ 00:09:28.913 "e99092c6-8aa7-47b1-9fa1-08b12fe702bc" 00:09:28.913 ], 00:09:28.913 "product_name": "Malloc disk", 00:09:28.913 "block_size": 512, 00:09:28.913 "num_blocks": 65536, 00:09:28.913 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:28.913 "assigned_rate_limits": { 00:09:28.913 "rw_ios_per_sec": 0, 00:09:28.913 "rw_mbytes_per_sec": 0, 00:09:28.913 "r_mbytes_per_sec": 0, 00:09:28.913 "w_mbytes_per_sec": 0 00:09:28.913 }, 00:09:28.913 "claimed": false, 00:09:28.914 "zoned": false, 00:09:28.914 "supported_io_types": { 00:09:28.914 "read": true, 00:09:28.914 "write": true, 00:09:28.914 "unmap": true, 00:09:28.914 "flush": true, 00:09:28.914 "reset": true, 00:09:28.914 "nvme_admin": false, 00:09:28.914 "nvme_io": false, 00:09:28.914 "nvme_io_md": false, 00:09:28.914 "write_zeroes": true, 
00:09:28.914 "zcopy": true, 00:09:28.914 "get_zone_info": false, 00:09:28.914 "zone_management": false, 00:09:28.914 "zone_append": false, 00:09:28.914 "compare": false, 00:09:28.914 "compare_and_write": false, 00:09:28.914 "abort": true, 00:09:28.914 "seek_hole": false, 00:09:28.914 "seek_data": false, 00:09:28.914 "copy": true, 00:09:28.914 "nvme_iov_md": false 00:09:28.914 }, 00:09:28.914 "memory_domains": [ 00:09:28.914 { 00:09:28.914 "dma_device_id": "system", 00:09:28.914 "dma_device_type": 1 00:09:28.914 }, 00:09:28.914 { 00:09:28.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.914 "dma_device_type": 2 00:09:28.914 } 00:09:28.914 ], 00:09:28.914 "driver_specific": {} 00:09:28.914 } 00:09:28.914 ] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.914 BaseBdev3 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.914 16:10:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.914 [ 00:09:28.914 { 00:09:28.914 "name": "BaseBdev3", 00:09:28.914 "aliases": [ 00:09:28.914 "245b0ba8-d67c-4e6a-a01e-6fae9326acd9" 00:09:28.914 ], 00:09:28.914 "product_name": "Malloc disk", 00:09:28.914 "block_size": 512, 00:09:28.914 "num_blocks": 65536, 00:09:28.914 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:28.914 "assigned_rate_limits": { 00:09:28.914 "rw_ios_per_sec": 0, 00:09:28.914 "rw_mbytes_per_sec": 0, 00:09:28.914 "r_mbytes_per_sec": 0, 00:09:28.914 "w_mbytes_per_sec": 0 00:09:28.914 }, 00:09:28.914 "claimed": false, 00:09:28.914 "zoned": false, 00:09:28.914 "supported_io_types": { 00:09:28.914 "read": true, 00:09:28.914 "write": true, 00:09:28.914 "unmap": true, 00:09:28.914 "flush": true, 00:09:28.914 "reset": true, 00:09:28.914 "nvme_admin": false, 00:09:28.914 "nvme_io": false, 00:09:28.914 "nvme_io_md": false, 00:09:28.914 "write_zeroes": true, 
00:09:28.914 "zcopy": true, 00:09:28.914 "get_zone_info": false, 00:09:28.914 "zone_management": false, 00:09:28.914 "zone_append": false, 00:09:28.914 "compare": false, 00:09:28.914 "compare_and_write": false, 00:09:28.914 "abort": true, 00:09:28.914 "seek_hole": false, 00:09:28.914 "seek_data": false, 00:09:28.914 "copy": true, 00:09:28.914 "nvme_iov_md": false 00:09:28.914 }, 00:09:28.914 "memory_domains": [ 00:09:28.914 { 00:09:28.914 "dma_device_id": "system", 00:09:28.914 "dma_device_type": 1 00:09:28.914 }, 00:09:28.914 { 00:09:28.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.914 "dma_device_type": 2 00:09:28.914 } 00:09:28.914 ], 00:09:28.914 "driver_specific": {} 00:09:28.914 } 00:09:28.914 ] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.914 [2024-09-28 16:10:43.570629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.914 [2024-09-28 16:10:43.570722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.914 [2024-09-28 16:10:43.570781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.914 [2024-09-28 16:10:43.572815] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.914 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.173 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.173 16:10:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.173 "name": "Existed_Raid", 00:09:29.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.173 "strip_size_kb": 64, 00:09:29.173 "state": "configuring", 00:09:29.173 "raid_level": "concat", 00:09:29.173 "superblock": false, 00:09:29.173 "num_base_bdevs": 3, 00:09:29.173 "num_base_bdevs_discovered": 2, 00:09:29.173 "num_base_bdevs_operational": 3, 00:09:29.173 "base_bdevs_list": [ 00:09:29.173 { 00:09:29.173 "name": "BaseBdev1", 00:09:29.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.173 "is_configured": false, 00:09:29.173 "data_offset": 0, 00:09:29.173 "data_size": 0 00:09:29.173 }, 00:09:29.173 { 00:09:29.173 "name": "BaseBdev2", 00:09:29.173 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:29.173 "is_configured": true, 00:09:29.173 "data_offset": 0, 00:09:29.173 "data_size": 65536 00:09:29.173 }, 00:09:29.173 { 00:09:29.173 "name": "BaseBdev3", 00:09:29.173 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:29.173 "is_configured": true, 00:09:29.173 "data_offset": 0, 00:09:29.173 "data_size": 65536 00:09:29.173 } 00:09:29.173 ] 00:09:29.173 }' 00:09:29.173 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.173 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.434 16:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:29.434 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.434 16:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.434 [2024-09-28 16:10:44.001844] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.434 "name": "Existed_Raid", 00:09:29.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.434 "strip_size_kb": 64, 00:09:29.434 "state": "configuring", 00:09:29.434 "raid_level": "concat", 00:09:29.434 "superblock": false, 
00:09:29.434 "num_base_bdevs": 3, 00:09:29.434 "num_base_bdevs_discovered": 1, 00:09:29.434 "num_base_bdevs_operational": 3, 00:09:29.434 "base_bdevs_list": [ 00:09:29.434 { 00:09:29.434 "name": "BaseBdev1", 00:09:29.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.434 "is_configured": false, 00:09:29.434 "data_offset": 0, 00:09:29.434 "data_size": 0 00:09:29.434 }, 00:09:29.434 { 00:09:29.434 "name": null, 00:09:29.434 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:29.434 "is_configured": false, 00:09:29.434 "data_offset": 0, 00:09:29.434 "data_size": 65536 00:09:29.434 }, 00:09:29.434 { 00:09:29.434 "name": "BaseBdev3", 00:09:29.434 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:29.434 "is_configured": true, 00:09:29.434 "data_offset": 0, 00:09:29.434 "data_size": 65536 00:09:29.434 } 00:09:29.434 ] 00:09:29.434 }' 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.434 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.003 
16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.003 [2024-09-28 16:10:44.538386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.003 BaseBdev1 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.003 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.003 [ 00:09:30.003 { 00:09:30.003 "name": "BaseBdev1", 00:09:30.003 "aliases": [ 00:09:30.003 "15ecf4da-10ca-47e4-8cdf-3a6133540fa1" 00:09:30.003 ], 00:09:30.003 "product_name": 
"Malloc disk", 00:09:30.003 "block_size": 512, 00:09:30.003 "num_blocks": 65536, 00:09:30.003 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:30.003 "assigned_rate_limits": { 00:09:30.003 "rw_ios_per_sec": 0, 00:09:30.003 "rw_mbytes_per_sec": 0, 00:09:30.003 "r_mbytes_per_sec": 0, 00:09:30.003 "w_mbytes_per_sec": 0 00:09:30.003 }, 00:09:30.003 "claimed": true, 00:09:30.003 "claim_type": "exclusive_write", 00:09:30.003 "zoned": false, 00:09:30.003 "supported_io_types": { 00:09:30.003 "read": true, 00:09:30.003 "write": true, 00:09:30.003 "unmap": true, 00:09:30.003 "flush": true, 00:09:30.004 "reset": true, 00:09:30.004 "nvme_admin": false, 00:09:30.004 "nvme_io": false, 00:09:30.004 "nvme_io_md": false, 00:09:30.004 "write_zeroes": true, 00:09:30.004 "zcopy": true, 00:09:30.004 "get_zone_info": false, 00:09:30.004 "zone_management": false, 00:09:30.004 "zone_append": false, 00:09:30.004 "compare": false, 00:09:30.004 "compare_and_write": false, 00:09:30.004 "abort": true, 00:09:30.004 "seek_hole": false, 00:09:30.004 "seek_data": false, 00:09:30.004 "copy": true, 00:09:30.004 "nvme_iov_md": false 00:09:30.004 }, 00:09:30.004 "memory_domains": [ 00:09:30.004 { 00:09:30.004 "dma_device_id": "system", 00:09:30.004 "dma_device_type": 1 00:09:30.004 }, 00:09:30.004 { 00:09:30.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.004 "dma_device_type": 2 00:09:30.004 } 00:09:30.004 ], 00:09:30.004 "driver_specific": {} 00:09:30.004 } 00:09:30.004 ] 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.004 16:10:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.004 "name": "Existed_Raid", 00:09:30.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.004 "strip_size_kb": 64, 00:09:30.004 "state": "configuring", 00:09:30.004 "raid_level": "concat", 00:09:30.004 "superblock": false, 00:09:30.004 "num_base_bdevs": 3, 00:09:30.004 "num_base_bdevs_discovered": 2, 00:09:30.004 "num_base_bdevs_operational": 3, 00:09:30.004 "base_bdevs_list": [ 00:09:30.004 { 00:09:30.004 "name": "BaseBdev1", 
00:09:30.004 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:30.004 "is_configured": true, 00:09:30.004 "data_offset": 0, 00:09:30.004 "data_size": 65536 00:09:30.004 }, 00:09:30.004 { 00:09:30.004 "name": null, 00:09:30.004 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:30.004 "is_configured": false, 00:09:30.004 "data_offset": 0, 00:09:30.004 "data_size": 65536 00:09:30.004 }, 00:09:30.004 { 00:09:30.004 "name": "BaseBdev3", 00:09:30.004 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:30.004 "is_configured": true, 00:09:30.004 "data_offset": 0, 00:09:30.004 "data_size": 65536 00:09:30.004 } 00:09:30.004 ] 00:09:30.004 }' 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.004 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.574 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.574 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.574 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.574 16:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.574 16:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.574 [2024-09-28 16:10:45.013603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.574 
16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.574 "name": "Existed_Raid", 00:09:30.574 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:30.574 "strip_size_kb": 64, 00:09:30.574 "state": "configuring", 00:09:30.574 "raid_level": "concat", 00:09:30.574 "superblock": false, 00:09:30.574 "num_base_bdevs": 3, 00:09:30.574 "num_base_bdevs_discovered": 1, 00:09:30.574 "num_base_bdevs_operational": 3, 00:09:30.574 "base_bdevs_list": [ 00:09:30.574 { 00:09:30.574 "name": "BaseBdev1", 00:09:30.574 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:30.574 "is_configured": true, 00:09:30.574 "data_offset": 0, 00:09:30.574 "data_size": 65536 00:09:30.574 }, 00:09:30.574 { 00:09:30.574 "name": null, 00:09:30.574 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:30.574 "is_configured": false, 00:09:30.574 "data_offset": 0, 00:09:30.574 "data_size": 65536 00:09:30.574 }, 00:09:30.574 { 00:09:30.574 "name": null, 00:09:30.574 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:30.574 "is_configured": false, 00:09:30.574 "data_offset": 0, 00:09:30.574 "data_size": 65536 00:09:30.574 } 00:09:30.574 ] 00:09:30.574 }' 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.574 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.835 [2024-09-28 16:10:45.484795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.835 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.167 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.167 "name": "Existed_Raid", 00:09:31.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.167 "strip_size_kb": 64, 00:09:31.167 "state": "configuring", 00:09:31.167 "raid_level": "concat", 00:09:31.167 "superblock": false, 00:09:31.167 "num_base_bdevs": 3, 00:09:31.167 "num_base_bdevs_discovered": 2, 00:09:31.167 "num_base_bdevs_operational": 3, 00:09:31.167 "base_bdevs_list": [ 00:09:31.167 { 00:09:31.167 "name": "BaseBdev1", 00:09:31.167 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:31.167 "is_configured": true, 00:09:31.167 "data_offset": 0, 00:09:31.167 "data_size": 65536 00:09:31.167 }, 00:09:31.167 { 00:09:31.167 "name": null, 00:09:31.167 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:31.167 "is_configured": false, 00:09:31.167 "data_offset": 0, 00:09:31.167 "data_size": 65536 00:09:31.167 }, 00:09:31.167 { 00:09:31.167 "name": "BaseBdev3", 00:09:31.167 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:31.167 "is_configured": true, 00:09:31.167 "data_offset": 0, 00:09:31.167 "data_size": 65536 00:09:31.167 } 00:09:31.167 ] 00:09:31.167 }' 00:09:31.167 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.167 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.458 16:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.458 [2024-09-28 16:10:45.972089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.458 
16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.458 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.459 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.459 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.459 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.459 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.459 "name": "Existed_Raid", 00:09:31.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.459 "strip_size_kb": 64, 00:09:31.459 "state": "configuring", 00:09:31.459 "raid_level": "concat", 00:09:31.459 "superblock": false, 00:09:31.459 "num_base_bdevs": 3, 00:09:31.459 "num_base_bdevs_discovered": 1, 00:09:31.459 "num_base_bdevs_operational": 3, 00:09:31.459 "base_bdevs_list": [ 00:09:31.459 { 00:09:31.459 "name": null, 00:09:31.459 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:31.459 "is_configured": false, 00:09:31.459 "data_offset": 0, 00:09:31.459 "data_size": 65536 00:09:31.459 }, 00:09:31.459 { 00:09:31.459 "name": null, 00:09:31.459 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:31.459 "is_configured": false, 00:09:31.459 "data_offset": 0, 00:09:31.459 "data_size": 65536 00:09:31.459 }, 00:09:31.459 { 00:09:31.459 "name": "BaseBdev3", 00:09:31.459 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:31.459 "is_configured": true, 00:09:31.459 "data_offset": 0, 00:09:31.459 "data_size": 65536 00:09:31.459 } 00:09:31.459 ] 00:09:31.459 }' 00:09:31.459 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.459 16:10:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 [2024-09-28 16:10:46.552634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.029 16:10:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.029 "name": "Existed_Raid", 00:09:32.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.029 "strip_size_kb": 64, 00:09:32.029 "state": "configuring", 00:09:32.029 "raid_level": "concat", 00:09:32.029 "superblock": false, 00:09:32.029 "num_base_bdevs": 3, 00:09:32.029 "num_base_bdevs_discovered": 2, 00:09:32.029 "num_base_bdevs_operational": 3, 00:09:32.029 "base_bdevs_list": [ 00:09:32.029 { 00:09:32.029 "name": null, 00:09:32.029 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:32.029 "is_configured": false, 00:09:32.029 "data_offset": 0, 00:09:32.029 "data_size": 65536 00:09:32.029 }, 00:09:32.029 { 00:09:32.029 "name": "BaseBdev2", 00:09:32.029 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:32.029 "is_configured": true, 00:09:32.029 "data_offset": 
0, 00:09:32.029 "data_size": 65536 00:09:32.029 }, 00:09:32.029 { 00:09:32.029 "name": "BaseBdev3", 00:09:32.029 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:32.029 "is_configured": true, 00:09:32.029 "data_offset": 0, 00:09:32.029 "data_size": 65536 00:09:32.029 } 00:09:32.029 ] 00:09:32.029 }' 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.029 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.290 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.290 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.290 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.290 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.290 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.550 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:32.550 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.550 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.550 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 16:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:32.550 16:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15ecf4da-10ca-47e4-8cdf-3a6133540fa1 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 [2024-09-28 16:10:47.068167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:32.550 [2024-09-28 16:10:47.068295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:32.550 [2024-09-28 16:10:47.068328] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:32.550 [2024-09-28 16:10:47.068675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:32.550 [2024-09-28 16:10:47.068899] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:32.550 [2024-09-28 16:10:47.068938] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:32.550 [2024-09-28 16:10:47.069215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.550 NewBaseBdev 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.550 
16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.550 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 [ 00:09:32.550 { 00:09:32.550 "name": "NewBaseBdev", 00:09:32.550 "aliases": [ 00:09:32.550 "15ecf4da-10ca-47e4-8cdf-3a6133540fa1" 00:09:32.550 ], 00:09:32.550 "product_name": "Malloc disk", 00:09:32.551 "block_size": 512, 00:09:32.551 "num_blocks": 65536, 00:09:32.551 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:32.551 "assigned_rate_limits": { 00:09:32.551 "rw_ios_per_sec": 0, 00:09:32.551 "rw_mbytes_per_sec": 0, 00:09:32.551 "r_mbytes_per_sec": 0, 00:09:32.551 "w_mbytes_per_sec": 0 00:09:32.551 }, 00:09:32.551 "claimed": true, 00:09:32.551 "claim_type": "exclusive_write", 00:09:32.551 "zoned": false, 00:09:32.551 "supported_io_types": { 00:09:32.551 "read": true, 00:09:32.551 "write": true, 00:09:32.551 "unmap": true, 00:09:32.551 "flush": true, 00:09:32.551 "reset": true, 00:09:32.551 "nvme_admin": false, 00:09:32.551 "nvme_io": false, 00:09:32.551 "nvme_io_md": false, 00:09:32.551 "write_zeroes": true, 00:09:32.551 "zcopy": true, 00:09:32.551 "get_zone_info": false, 00:09:32.551 "zone_management": false, 00:09:32.551 "zone_append": false, 00:09:32.551 "compare": false, 00:09:32.551 "compare_and_write": false, 00:09:32.551 "abort": true, 00:09:32.551 "seek_hole": false, 00:09:32.551 "seek_data": false, 00:09:32.551 "copy": true, 00:09:32.551 "nvme_iov_md": false 00:09:32.551 }, 00:09:32.551 
"memory_domains": [ 00:09:32.551 { 00:09:32.551 "dma_device_id": "system", 00:09:32.551 "dma_device_type": 1 00:09:32.551 }, 00:09:32.551 { 00:09:32.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.551 "dma_device_type": 2 00:09:32.551 } 00:09:32.551 ], 00:09:32.551 "driver_specific": {} 00:09:32.551 } 00:09:32.551 ] 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.551 "name": "Existed_Raid", 00:09:32.551 "uuid": "445f01f0-a5ac-40cd-89be-bba51385d2dd", 00:09:32.551 "strip_size_kb": 64, 00:09:32.551 "state": "online", 00:09:32.551 "raid_level": "concat", 00:09:32.551 "superblock": false, 00:09:32.551 "num_base_bdevs": 3, 00:09:32.551 "num_base_bdevs_discovered": 3, 00:09:32.551 "num_base_bdevs_operational": 3, 00:09:32.551 "base_bdevs_list": [ 00:09:32.551 { 00:09:32.551 "name": "NewBaseBdev", 00:09:32.551 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:32.551 "is_configured": true, 00:09:32.551 "data_offset": 0, 00:09:32.551 "data_size": 65536 00:09:32.551 }, 00:09:32.551 { 00:09:32.551 "name": "BaseBdev2", 00:09:32.551 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:32.551 "is_configured": true, 00:09:32.551 "data_offset": 0, 00:09:32.551 "data_size": 65536 00:09:32.551 }, 00:09:32.551 { 00:09:32.551 "name": "BaseBdev3", 00:09:32.551 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:32.551 "is_configured": true, 00:09:32.551 "data_offset": 0, 00:09:32.551 "data_size": 65536 00:09:32.551 } 00:09:32.551 ] 00:09:32.551 }' 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.551 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.122 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.123 [2024-09-28 16:10:47.559671] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.123 "name": "Existed_Raid", 00:09:33.123 "aliases": [ 00:09:33.123 "445f01f0-a5ac-40cd-89be-bba51385d2dd" 00:09:33.123 ], 00:09:33.123 "product_name": "Raid Volume", 00:09:33.123 "block_size": 512, 00:09:33.123 "num_blocks": 196608, 00:09:33.123 "uuid": "445f01f0-a5ac-40cd-89be-bba51385d2dd", 00:09:33.123 "assigned_rate_limits": { 00:09:33.123 "rw_ios_per_sec": 0, 00:09:33.123 "rw_mbytes_per_sec": 0, 00:09:33.123 "r_mbytes_per_sec": 0, 00:09:33.123 "w_mbytes_per_sec": 0 00:09:33.123 }, 00:09:33.123 "claimed": false, 00:09:33.123 "zoned": false, 00:09:33.123 "supported_io_types": { 00:09:33.123 "read": true, 00:09:33.123 "write": true, 00:09:33.123 "unmap": true, 00:09:33.123 "flush": true, 00:09:33.123 "reset": true, 00:09:33.123 "nvme_admin": false, 00:09:33.123 "nvme_io": false, 00:09:33.123 "nvme_io_md": false, 00:09:33.123 "write_zeroes": true, 
00:09:33.123 "zcopy": false, 00:09:33.123 "get_zone_info": false, 00:09:33.123 "zone_management": false, 00:09:33.123 "zone_append": false, 00:09:33.123 "compare": false, 00:09:33.123 "compare_and_write": false, 00:09:33.123 "abort": false, 00:09:33.123 "seek_hole": false, 00:09:33.123 "seek_data": false, 00:09:33.123 "copy": false, 00:09:33.123 "nvme_iov_md": false 00:09:33.123 }, 00:09:33.123 "memory_domains": [ 00:09:33.123 { 00:09:33.123 "dma_device_id": "system", 00:09:33.123 "dma_device_type": 1 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.123 "dma_device_type": 2 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "system", 00:09:33.123 "dma_device_type": 1 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.123 "dma_device_type": 2 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "system", 00:09:33.123 "dma_device_type": 1 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.123 "dma_device_type": 2 00:09:33.123 } 00:09:33.123 ], 00:09:33.123 "driver_specific": { 00:09:33.123 "raid": { 00:09:33.123 "uuid": "445f01f0-a5ac-40cd-89be-bba51385d2dd", 00:09:33.123 "strip_size_kb": 64, 00:09:33.123 "state": "online", 00:09:33.123 "raid_level": "concat", 00:09:33.123 "superblock": false, 00:09:33.123 "num_base_bdevs": 3, 00:09:33.123 "num_base_bdevs_discovered": 3, 00:09:33.123 "num_base_bdevs_operational": 3, 00:09:33.123 "base_bdevs_list": [ 00:09:33.123 { 00:09:33.123 "name": "NewBaseBdev", 00:09:33.123 "uuid": "15ecf4da-10ca-47e4-8cdf-3a6133540fa1", 00:09:33.123 "is_configured": true, 00:09:33.123 "data_offset": 0, 00:09:33.123 "data_size": 65536 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "name": "BaseBdev2", 00:09:33.123 "uuid": "e99092c6-8aa7-47b1-9fa1-08b12fe702bc", 00:09:33.123 "is_configured": true, 00:09:33.123 "data_offset": 0, 00:09:33.123 "data_size": 65536 00:09:33.123 }, 00:09:33.123 { 
00:09:33.123 "name": "BaseBdev3", 00:09:33.123 "uuid": "245b0ba8-d67c-4e6a-a01e-6fae9326acd9", 00:09:33.123 "is_configured": true, 00:09:33.123 "data_offset": 0, 00:09:33.123 "data_size": 65536 00:09:33.123 } 00:09:33.123 ] 00:09:33.123 } 00:09:33.123 } 00:09:33.123 }' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:33.123 BaseBdev2 00:09:33.123 BaseBdev3' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.123 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:33.383 [2024-09-28 16:10:47.818932] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.383 [2024-09-28 16:10:47.819000] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.383 [2024-09-28 16:10:47.819120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.383 [2024-09-28 16:10:47.819196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.383 [2024-09-28 16:10:47.819267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65616 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65616 ']' 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65616 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65616 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65616' 00:09:33.383 killing process with pid 65616 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65616 00:09:33.383 [2024-09-28 16:10:47.870675] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.383 16:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65616 00:09:33.642 [2024-09-28 16:10:48.191415] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:35.023 00:09:35.023 real 0m10.677s 00:09:35.023 user 0m16.579s 00:09:35.023 sys 0m1.999s 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.023 ************************************ 00:09:35.023 END TEST raid_state_function_test 00:09:35.023 ************************************ 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.023 16:10:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:35.023 16:10:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:35.023 16:10:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.023 16:10:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.023 ************************************ 00:09:35.023 START TEST raid_state_function_test_sb 00:09:35.023 ************************************ 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.023 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66247 00:09:35.024 Process raid pid: 66247 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66247' 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66247 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66247 ']' 00:09:35.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.024 16:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.024 [2024-09-28 16:10:49.688174] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:35.024 [2024-09-28 16:10:49.688384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.284 [2024-09-28 16:10:49.851950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.544 [2024-09-28 16:10:50.091443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.804 [2024-09-28 16:10:50.325322] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.804 [2024-09-28 16:10:50.325361] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 [2024-09-28 16:10:50.517777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.064 [2024-09-28 16:10:50.517911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.064 [2024-09-28 
16:10:50.517928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.064 [2024-09-28 16:10:50.517938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.064 [2024-09-28 16:10:50.517944] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.064 [2024-09-28 16:10:50.517954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.064 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.065 "name": "Existed_Raid", 00:09:36.065 "uuid": "c1d1f3ab-4f65-4353-9c78-b4b50ec4b9fb", 00:09:36.065 "strip_size_kb": 64, 00:09:36.065 "state": "configuring", 00:09:36.065 "raid_level": "concat", 00:09:36.065 "superblock": true, 00:09:36.065 "num_base_bdevs": 3, 00:09:36.065 "num_base_bdevs_discovered": 0, 00:09:36.065 "num_base_bdevs_operational": 3, 00:09:36.065 "base_bdevs_list": [ 00:09:36.065 { 00:09:36.065 "name": "BaseBdev1", 00:09:36.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.065 "is_configured": false, 00:09:36.065 "data_offset": 0, 00:09:36.065 "data_size": 0 00:09:36.065 }, 00:09:36.065 { 00:09:36.065 "name": "BaseBdev2", 00:09:36.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.065 "is_configured": false, 00:09:36.065 "data_offset": 0, 00:09:36.065 "data_size": 0 00:09:36.065 }, 00:09:36.065 { 00:09:36.065 "name": "BaseBdev3", 00:09:36.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.065 "is_configured": false, 00:09:36.065 "data_offset": 0, 00:09:36.065 "data_size": 0 00:09:36.065 } 00:09:36.065 ] 00:09:36.065 }' 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.065 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 [2024-09-28 16:10:50.948931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.325 [2024-09-28 16:10:50.949012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 [2024-09-28 16:10:50.960935] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.325 [2024-09-28 16:10:50.961025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.325 [2024-09-28 16:10:50.961051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.325 [2024-09-28 16:10:50.961074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.325 [2024-09-28 16:10:50.961091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.325 [2024-09-28 16:10:50.961111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.325 
16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.325 16:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.585 [2024-09-28 16:10:51.045319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.585 BaseBdev1 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.585 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.585 [ 00:09:36.585 { 
00:09:36.585 "name": "BaseBdev1", 00:09:36.585 "aliases": [ 00:09:36.585 "43f4aace-82f1-4a2b-86af-234476511a1e" 00:09:36.585 ], 00:09:36.585 "product_name": "Malloc disk", 00:09:36.585 "block_size": 512, 00:09:36.585 "num_blocks": 65536, 00:09:36.585 "uuid": "43f4aace-82f1-4a2b-86af-234476511a1e", 00:09:36.585 "assigned_rate_limits": { 00:09:36.585 "rw_ios_per_sec": 0, 00:09:36.585 "rw_mbytes_per_sec": 0, 00:09:36.585 "r_mbytes_per_sec": 0, 00:09:36.585 "w_mbytes_per_sec": 0 00:09:36.585 }, 00:09:36.585 "claimed": true, 00:09:36.585 "claim_type": "exclusive_write", 00:09:36.585 "zoned": false, 00:09:36.585 "supported_io_types": { 00:09:36.585 "read": true, 00:09:36.585 "write": true, 00:09:36.585 "unmap": true, 00:09:36.585 "flush": true, 00:09:36.585 "reset": true, 00:09:36.585 "nvme_admin": false, 00:09:36.586 "nvme_io": false, 00:09:36.586 "nvme_io_md": false, 00:09:36.586 "write_zeroes": true, 00:09:36.586 "zcopy": true, 00:09:36.586 "get_zone_info": false, 00:09:36.586 "zone_management": false, 00:09:36.586 "zone_append": false, 00:09:36.586 "compare": false, 00:09:36.586 "compare_and_write": false, 00:09:36.586 "abort": true, 00:09:36.586 "seek_hole": false, 00:09:36.586 "seek_data": false, 00:09:36.586 "copy": true, 00:09:36.586 "nvme_iov_md": false 00:09:36.586 }, 00:09:36.586 "memory_domains": [ 00:09:36.586 { 00:09:36.586 "dma_device_id": "system", 00:09:36.586 "dma_device_type": 1 00:09:36.586 }, 00:09:36.586 { 00:09:36.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.586 "dma_device_type": 2 00:09:36.586 } 00:09:36.586 ], 00:09:36.586 "driver_specific": {} 00:09:36.586 } 00:09:36.586 ] 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.586 "name": "Existed_Raid", 00:09:36.586 "uuid": "8a8af2ab-d179-4ae1-8cea-c4a52232eeb2", 00:09:36.586 "strip_size_kb": 64, 00:09:36.586 "state": "configuring", 00:09:36.586 "raid_level": "concat", 00:09:36.586 "superblock": true, 00:09:36.586 
"num_base_bdevs": 3, 00:09:36.586 "num_base_bdevs_discovered": 1, 00:09:36.586 "num_base_bdevs_operational": 3, 00:09:36.586 "base_bdevs_list": [ 00:09:36.586 { 00:09:36.586 "name": "BaseBdev1", 00:09:36.586 "uuid": "43f4aace-82f1-4a2b-86af-234476511a1e", 00:09:36.586 "is_configured": true, 00:09:36.586 "data_offset": 2048, 00:09:36.586 "data_size": 63488 00:09:36.586 }, 00:09:36.586 { 00:09:36.586 "name": "BaseBdev2", 00:09:36.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.586 "is_configured": false, 00:09:36.586 "data_offset": 0, 00:09:36.586 "data_size": 0 00:09:36.586 }, 00:09:36.586 { 00:09:36.586 "name": "BaseBdev3", 00:09:36.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.586 "is_configured": false, 00:09:36.586 "data_offset": 0, 00:09:36.586 "data_size": 0 00:09:36.586 } 00:09:36.586 ] 00:09:36.586 }' 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.586 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.156 [2024-09-28 16:10:51.576403] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.156 [2024-09-28 16:10:51.576455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.156 
16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.156 [2024-09-28 16:10:51.588427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.156 [2024-09-28 16:10:51.590554] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.156 [2024-09-28 16:10:51.590599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.156 [2024-09-28 16:10:51.590610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.156 [2024-09-28 16:10:51.590619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.156 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.157 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.157 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.157 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.157 "name": "Existed_Raid", 00:09:37.157 "uuid": "e0843ccc-7c7d-4f38-a851-31997b39c841", 00:09:37.157 "strip_size_kb": 64, 00:09:37.157 "state": "configuring", 00:09:37.157 "raid_level": "concat", 00:09:37.157 "superblock": true, 00:09:37.157 "num_base_bdevs": 3, 00:09:37.157 "num_base_bdevs_discovered": 1, 00:09:37.157 "num_base_bdevs_operational": 3, 00:09:37.157 "base_bdevs_list": [ 00:09:37.157 { 00:09:37.157 "name": "BaseBdev1", 00:09:37.157 "uuid": "43f4aace-82f1-4a2b-86af-234476511a1e", 00:09:37.157 "is_configured": true, 00:09:37.157 "data_offset": 2048, 00:09:37.157 "data_size": 63488 00:09:37.157 }, 00:09:37.157 { 00:09:37.157 "name": "BaseBdev2", 00:09:37.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.157 "is_configured": false, 00:09:37.157 "data_offset": 0, 00:09:37.157 "data_size": 0 00:09:37.157 }, 00:09:37.157 { 00:09:37.157 "name": "BaseBdev3", 00:09:37.157 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:37.157 "is_configured": false, 00:09:37.157 "data_offset": 0, 00:09:37.157 "data_size": 0 00:09:37.157 } 00:09:37.157 ] 00:09:37.157 }' 00:09:37.157 16:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.157 16:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.416 [2024-09-28 16:10:52.090744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.416 BaseBdev2 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.416 16:10:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.675 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.675 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.675 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.675 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.675 [ 00:09:37.675 { 00:09:37.675 "name": "BaseBdev2", 00:09:37.675 "aliases": [ 00:09:37.676 "a6072e78-4e0f-49d7-83ce-922b649b7a5b" 00:09:37.676 ], 00:09:37.676 "product_name": "Malloc disk", 00:09:37.676 "block_size": 512, 00:09:37.676 "num_blocks": 65536, 00:09:37.676 "uuid": "a6072e78-4e0f-49d7-83ce-922b649b7a5b", 00:09:37.676 "assigned_rate_limits": { 00:09:37.676 "rw_ios_per_sec": 0, 00:09:37.676 "rw_mbytes_per_sec": 0, 00:09:37.676 "r_mbytes_per_sec": 0, 00:09:37.676 "w_mbytes_per_sec": 0 00:09:37.676 }, 00:09:37.676 "claimed": true, 00:09:37.676 "claim_type": "exclusive_write", 00:09:37.676 "zoned": false, 00:09:37.676 "supported_io_types": { 00:09:37.676 "read": true, 00:09:37.676 "write": true, 00:09:37.676 "unmap": true, 00:09:37.676 "flush": true, 00:09:37.676 "reset": true, 00:09:37.676 "nvme_admin": false, 00:09:37.676 "nvme_io": false, 00:09:37.676 "nvme_io_md": false, 00:09:37.676 "write_zeroes": true, 00:09:37.676 "zcopy": true, 00:09:37.676 "get_zone_info": false, 00:09:37.676 "zone_management": false, 00:09:37.676 "zone_append": false, 00:09:37.676 "compare": false, 00:09:37.676 "compare_and_write": false, 00:09:37.676 "abort": true, 00:09:37.676 "seek_hole": false, 00:09:37.676 "seek_data": false, 00:09:37.676 "copy": true, 00:09:37.676 "nvme_iov_md": false 00:09:37.676 }, 00:09:37.676 "memory_domains": [ 00:09:37.676 { 00:09:37.676 "dma_device_id": "system", 00:09:37.676 "dma_device_type": 1 00:09:37.676 }, 00:09:37.676 { 00:09:37.676 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.676 "dma_device_type": 2 00:09:37.676 } 00:09:37.676 ], 00:09:37.676 "driver_specific": {} 00:09:37.676 } 00:09:37.676 ] 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.676 "name": "Existed_Raid", 00:09:37.676 "uuid": "e0843ccc-7c7d-4f38-a851-31997b39c841", 00:09:37.676 "strip_size_kb": 64, 00:09:37.676 "state": "configuring", 00:09:37.676 "raid_level": "concat", 00:09:37.676 "superblock": true, 00:09:37.676 "num_base_bdevs": 3, 00:09:37.676 "num_base_bdevs_discovered": 2, 00:09:37.676 "num_base_bdevs_operational": 3, 00:09:37.676 "base_bdevs_list": [ 00:09:37.676 { 00:09:37.676 "name": "BaseBdev1", 00:09:37.676 "uuid": "43f4aace-82f1-4a2b-86af-234476511a1e", 00:09:37.676 "is_configured": true, 00:09:37.676 "data_offset": 2048, 00:09:37.676 "data_size": 63488 00:09:37.676 }, 00:09:37.676 { 00:09:37.676 "name": "BaseBdev2", 00:09:37.676 "uuid": "a6072e78-4e0f-49d7-83ce-922b649b7a5b", 00:09:37.676 "is_configured": true, 00:09:37.676 "data_offset": 2048, 00:09:37.676 "data_size": 63488 00:09:37.676 }, 00:09:37.676 { 00:09:37.676 "name": "BaseBdev3", 00:09:37.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.676 "is_configured": false, 00:09:37.676 "data_offset": 0, 00:09:37.676 "data_size": 0 00:09:37.676 } 00:09:37.676 ] 00:09:37.676 }' 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.676 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.935 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.936 16:10:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.936 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.196 [2024-09-28 16:10:52.632396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.196 [2024-09-28 16:10:52.632677] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.196 [2024-09-28 16:10:52.632703] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:38.196 [2024-09-28 16:10:52.632979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:38.196 BaseBdev3 00:09:38.196 [2024-09-28 16:10:52.633148] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.196 [2024-09-28 16:10:52.633165] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:38.196 [2024-09-28 16:10:52.633359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.196 [ 00:09:38.196 { 00:09:38.196 "name": "BaseBdev3", 00:09:38.196 "aliases": [ 00:09:38.196 "c08e6a48-944d-488c-819d-3292df2a797e" 00:09:38.196 ], 00:09:38.196 "product_name": "Malloc disk", 00:09:38.196 "block_size": 512, 00:09:38.196 "num_blocks": 65536, 00:09:38.196 "uuid": "c08e6a48-944d-488c-819d-3292df2a797e", 00:09:38.196 "assigned_rate_limits": { 00:09:38.196 "rw_ios_per_sec": 0, 00:09:38.196 "rw_mbytes_per_sec": 0, 00:09:38.196 "r_mbytes_per_sec": 0, 00:09:38.196 "w_mbytes_per_sec": 0 00:09:38.196 }, 00:09:38.196 "claimed": true, 00:09:38.196 "claim_type": "exclusive_write", 00:09:38.196 "zoned": false, 00:09:38.196 "supported_io_types": { 00:09:38.196 "read": true, 00:09:38.196 "write": true, 00:09:38.196 "unmap": true, 00:09:38.196 "flush": true, 00:09:38.196 "reset": true, 00:09:38.196 "nvme_admin": false, 00:09:38.196 "nvme_io": false, 00:09:38.196 "nvme_io_md": false, 00:09:38.196 "write_zeroes": true, 00:09:38.196 "zcopy": true, 00:09:38.196 "get_zone_info": false, 00:09:38.196 "zone_management": false, 00:09:38.196 "zone_append": false, 00:09:38.196 "compare": false, 00:09:38.196 "compare_and_write": false, 00:09:38.196 "abort": true, 00:09:38.196 "seek_hole": false, 00:09:38.196 "seek_data": false, 
00:09:38.196 "copy": true, 00:09:38.196 "nvme_iov_md": false 00:09:38.196 }, 00:09:38.196 "memory_domains": [ 00:09:38.196 { 00:09:38.196 "dma_device_id": "system", 00:09:38.196 "dma_device_type": 1 00:09:38.196 }, 00:09:38.196 { 00:09:38.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.196 "dma_device_type": 2 00:09:38.196 } 00:09:38.196 ], 00:09:38.196 "driver_specific": {} 00:09:38.196 } 00:09:38.196 ] 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.196 "name": "Existed_Raid", 00:09:38.196 "uuid": "e0843ccc-7c7d-4f38-a851-31997b39c841", 00:09:38.196 "strip_size_kb": 64, 00:09:38.196 "state": "online", 00:09:38.196 "raid_level": "concat", 00:09:38.196 "superblock": true, 00:09:38.196 "num_base_bdevs": 3, 00:09:38.196 "num_base_bdevs_discovered": 3, 00:09:38.196 "num_base_bdevs_operational": 3, 00:09:38.196 "base_bdevs_list": [ 00:09:38.196 { 00:09:38.196 "name": "BaseBdev1", 00:09:38.196 "uuid": "43f4aace-82f1-4a2b-86af-234476511a1e", 00:09:38.196 "is_configured": true, 00:09:38.196 "data_offset": 2048, 00:09:38.196 "data_size": 63488 00:09:38.196 }, 00:09:38.196 { 00:09:38.196 "name": "BaseBdev2", 00:09:38.196 "uuid": "a6072e78-4e0f-49d7-83ce-922b649b7a5b", 00:09:38.196 "is_configured": true, 00:09:38.196 "data_offset": 2048, 00:09:38.196 "data_size": 63488 00:09:38.196 }, 00:09:38.196 { 00:09:38.196 "name": "BaseBdev3", 00:09:38.196 "uuid": "c08e6a48-944d-488c-819d-3292df2a797e", 00:09:38.196 "is_configured": true, 00:09:38.196 "data_offset": 2048, 00:09:38.196 "data_size": 63488 00:09:38.196 } 00:09:38.196 ] 00:09:38.196 }' 00:09:38.196 16:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.196 16:10:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.456 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.456 [2024-09-28 16:10:53.123860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.716 "name": "Existed_Raid", 00:09:38.716 "aliases": [ 00:09:38.716 "e0843ccc-7c7d-4f38-a851-31997b39c841" 00:09:38.716 ], 00:09:38.716 "product_name": "Raid Volume", 00:09:38.716 "block_size": 512, 00:09:38.716 "num_blocks": 190464, 00:09:38.716 "uuid": "e0843ccc-7c7d-4f38-a851-31997b39c841", 00:09:38.716 "assigned_rate_limits": { 00:09:38.716 "rw_ios_per_sec": 0, 00:09:38.716 "rw_mbytes_per_sec": 0, 00:09:38.716 
"r_mbytes_per_sec": 0, 00:09:38.716 "w_mbytes_per_sec": 0 00:09:38.716 }, 00:09:38.716 "claimed": false, 00:09:38.716 "zoned": false, 00:09:38.716 "supported_io_types": { 00:09:38.716 "read": true, 00:09:38.716 "write": true, 00:09:38.716 "unmap": true, 00:09:38.716 "flush": true, 00:09:38.716 "reset": true, 00:09:38.716 "nvme_admin": false, 00:09:38.716 "nvme_io": false, 00:09:38.716 "nvme_io_md": false, 00:09:38.716 "write_zeroes": true, 00:09:38.716 "zcopy": false, 00:09:38.716 "get_zone_info": false, 00:09:38.716 "zone_management": false, 00:09:38.716 "zone_append": false, 00:09:38.716 "compare": false, 00:09:38.716 "compare_and_write": false, 00:09:38.716 "abort": false, 00:09:38.716 "seek_hole": false, 00:09:38.716 "seek_data": false, 00:09:38.716 "copy": false, 00:09:38.716 "nvme_iov_md": false 00:09:38.716 }, 00:09:38.716 "memory_domains": [ 00:09:38.716 { 00:09:38.716 "dma_device_id": "system", 00:09:38.716 "dma_device_type": 1 00:09:38.716 }, 00:09:38.716 { 00:09:38.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.716 "dma_device_type": 2 00:09:38.716 }, 00:09:38.716 { 00:09:38.716 "dma_device_id": "system", 00:09:38.716 "dma_device_type": 1 00:09:38.716 }, 00:09:38.716 { 00:09:38.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.716 "dma_device_type": 2 00:09:38.716 }, 00:09:38.716 { 00:09:38.716 "dma_device_id": "system", 00:09:38.716 "dma_device_type": 1 00:09:38.716 }, 00:09:38.716 { 00:09:38.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.716 "dma_device_type": 2 00:09:38.716 } 00:09:38.716 ], 00:09:38.716 "driver_specific": { 00:09:38.716 "raid": { 00:09:38.716 "uuid": "e0843ccc-7c7d-4f38-a851-31997b39c841", 00:09:38.716 "strip_size_kb": 64, 00:09:38.716 "state": "online", 00:09:38.716 "raid_level": "concat", 00:09:38.716 "superblock": true, 00:09:38.716 "num_base_bdevs": 3, 00:09:38.716 "num_base_bdevs_discovered": 3, 00:09:38.716 "num_base_bdevs_operational": 3, 00:09:38.716 "base_bdevs_list": [ 00:09:38.716 { 00:09:38.716 
"name": "BaseBdev1", 00:09:38.716 "uuid": "43f4aace-82f1-4a2b-86af-234476511a1e", 00:09:38.716 "is_configured": true, 00:09:38.716 "data_offset": 2048, 00:09:38.716 "data_size": 63488 00:09:38.716 }, 00:09:38.716 { 00:09:38.716 "name": "BaseBdev2", 00:09:38.716 "uuid": "a6072e78-4e0f-49d7-83ce-922b649b7a5b", 00:09:38.716 "is_configured": true, 00:09:38.716 "data_offset": 2048, 00:09:38.716 "data_size": 63488 00:09:38.716 }, 00:09:38.716 { 00:09:38.716 "name": "BaseBdev3", 00:09:38.716 "uuid": "c08e6a48-944d-488c-819d-3292df2a797e", 00:09:38.716 "is_configured": true, 00:09:38.716 "data_offset": 2048, 00:09:38.716 "data_size": 63488 00:09:38.716 } 00:09:38.716 ] 00:09:38.716 } 00:09:38.716 } 00:09:38.716 }' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:38.716 BaseBdev2 00:09:38.716 BaseBdev3' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 16:10:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.716 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 [2024-09-28 16:10:53.387126] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.716 [2024-09-28 16:10:53.387154] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.716 [2024-09-28 16:10:53.387205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.976 "name": "Existed_Raid", 00:09:38.976 "uuid": "e0843ccc-7c7d-4f38-a851-31997b39c841", 00:09:38.976 "strip_size_kb": 64, 00:09:38.976 "state": "offline", 00:09:38.976 "raid_level": "concat", 00:09:38.976 "superblock": true, 00:09:38.976 "num_base_bdevs": 3, 00:09:38.976 "num_base_bdevs_discovered": 2, 00:09:38.976 "num_base_bdevs_operational": 2, 00:09:38.976 "base_bdevs_list": [ 00:09:38.976 { 00:09:38.976 "name": null, 00:09:38.976 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:38.976 "is_configured": false, 00:09:38.976 "data_offset": 0, 00:09:38.976 "data_size": 63488 00:09:38.976 }, 00:09:38.976 { 00:09:38.976 "name": "BaseBdev2", 00:09:38.976 "uuid": "a6072e78-4e0f-49d7-83ce-922b649b7a5b", 00:09:38.976 "is_configured": true, 00:09:38.976 "data_offset": 2048, 00:09:38.976 "data_size": 63488 00:09:38.976 }, 00:09:38.976 { 00:09:38.976 "name": "BaseBdev3", 00:09:38.976 "uuid": "c08e6a48-944d-488c-819d-3292df2a797e", 00:09:38.976 "is_configured": true, 00:09:38.976 "data_offset": 2048, 00:09:38.976 "data_size": 63488 00:09:38.976 } 00:09:38.976 ] 00:09:38.976 }' 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.976 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.235 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:39.235 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.494 16:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.494 [2024-09-28 16:10:53.974297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.494 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.494 [2024-09-28 16:10:54.127809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.495 [2024-09-28 16:10:54.127931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.754 BaseBdev2 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.754 
16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.754 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 [ 00:09:39.755 { 00:09:39.755 "name": "BaseBdev2", 00:09:39.755 "aliases": [ 00:09:39.755 "6787f73a-b2af-4991-a25b-161973bebdaa" 00:09:39.755 ], 00:09:39.755 "product_name": "Malloc disk", 00:09:39.755 "block_size": 512, 00:09:39.755 "num_blocks": 65536, 00:09:39.755 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:39.755 "assigned_rate_limits": { 00:09:39.755 "rw_ios_per_sec": 0, 00:09:39.755 "rw_mbytes_per_sec": 0, 00:09:39.755 "r_mbytes_per_sec": 0, 00:09:39.755 "w_mbytes_per_sec": 0 
00:09:39.755 }, 00:09:39.755 "claimed": false, 00:09:39.755 "zoned": false, 00:09:39.755 "supported_io_types": { 00:09:39.755 "read": true, 00:09:39.755 "write": true, 00:09:39.755 "unmap": true, 00:09:39.755 "flush": true, 00:09:39.755 "reset": true, 00:09:39.755 "nvme_admin": false, 00:09:39.755 "nvme_io": false, 00:09:39.755 "nvme_io_md": false, 00:09:39.755 "write_zeroes": true, 00:09:39.755 "zcopy": true, 00:09:39.755 "get_zone_info": false, 00:09:39.755 "zone_management": false, 00:09:39.755 "zone_append": false, 00:09:39.755 "compare": false, 00:09:39.755 "compare_and_write": false, 00:09:39.755 "abort": true, 00:09:39.755 "seek_hole": false, 00:09:39.755 "seek_data": false, 00:09:39.755 "copy": true, 00:09:39.755 "nvme_iov_md": false 00:09:39.755 }, 00:09:39.755 "memory_domains": [ 00:09:39.755 { 00:09:39.755 "dma_device_id": "system", 00:09:39.755 "dma_device_type": 1 00:09:39.755 }, 00:09:39.755 { 00:09:39.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.755 "dma_device_type": 2 00:09:39.755 } 00:09:39.755 ], 00:09:39.755 "driver_specific": {} 00:09:39.755 } 00:09:39.755 ] 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 BaseBdev3 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.755 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 [ 00:09:39.755 { 00:09:39.755 "name": "BaseBdev3", 00:09:39.755 "aliases": [ 00:09:39.755 "88ef8672-acb0-43af-88f5-d0422635c49d" 00:09:39.755 ], 00:09:39.755 "product_name": "Malloc disk", 00:09:39.755 "block_size": 512, 00:09:39.755 "num_blocks": 65536, 00:09:39.755 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:39.755 "assigned_rate_limits": { 00:09:39.755 "rw_ios_per_sec": 0, 00:09:39.755 "rw_mbytes_per_sec": 0, 
00:09:39.755 "r_mbytes_per_sec": 0, 00:09:39.755 "w_mbytes_per_sec": 0 00:09:39.755 }, 00:09:39.755 "claimed": false, 00:09:39.755 "zoned": false, 00:09:39.755 "supported_io_types": { 00:09:39.755 "read": true, 00:09:39.755 "write": true, 00:09:39.755 "unmap": true, 00:09:39.755 "flush": true, 00:09:40.015 "reset": true, 00:09:40.015 "nvme_admin": false, 00:09:40.015 "nvme_io": false, 00:09:40.015 "nvme_io_md": false, 00:09:40.015 "write_zeroes": true, 00:09:40.015 "zcopy": true, 00:09:40.015 "get_zone_info": false, 00:09:40.015 "zone_management": false, 00:09:40.015 "zone_append": false, 00:09:40.015 "compare": false, 00:09:40.015 "compare_and_write": false, 00:09:40.015 "abort": true, 00:09:40.015 "seek_hole": false, 00:09:40.015 "seek_data": false, 00:09:40.015 "copy": true, 00:09:40.015 "nvme_iov_md": false 00:09:40.015 }, 00:09:40.015 "memory_domains": [ 00:09:40.015 { 00:09:40.015 "dma_device_id": "system", 00:09:40.015 "dma_device_type": 1 00:09:40.015 }, 00:09:40.015 { 00:09:40.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.015 "dma_device_type": 2 00:09:40.015 } 00:09:40.015 ], 00:09:40.015 "driver_specific": {} 00:09:40.015 } 00:09:40.015 ] 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.015 [2024-09-28 16:10:54.450893] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.015 [2024-09-28 16:10:54.450979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.015 [2024-09-28 16:10:54.451007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.015 [2024-09-28 16:10:54.453055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.015 "name": "Existed_Raid", 00:09:40.015 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:40.015 "strip_size_kb": 64, 00:09:40.015 "state": "configuring", 00:09:40.015 "raid_level": "concat", 00:09:40.015 "superblock": true, 00:09:40.015 "num_base_bdevs": 3, 00:09:40.015 "num_base_bdevs_discovered": 2, 00:09:40.015 "num_base_bdevs_operational": 3, 00:09:40.015 "base_bdevs_list": [ 00:09:40.015 { 00:09:40.015 "name": "BaseBdev1", 00:09:40.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.015 "is_configured": false, 00:09:40.015 "data_offset": 0, 00:09:40.015 "data_size": 0 00:09:40.015 }, 00:09:40.015 { 00:09:40.015 "name": "BaseBdev2", 00:09:40.015 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:40.015 "is_configured": true, 00:09:40.015 "data_offset": 2048, 00:09:40.015 "data_size": 63488 00:09:40.015 }, 00:09:40.015 { 00:09:40.015 "name": "BaseBdev3", 00:09:40.015 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:40.015 "is_configured": true, 00:09:40.015 "data_offset": 2048, 00:09:40.015 "data_size": 63488 00:09:40.015 } 00:09:40.015 ] 00:09:40.015 }' 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.015 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.275 [2024-09-28 16:10:54.886098] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.275 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.275 "name": "Existed_Raid", 00:09:40.275 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:40.275 "strip_size_kb": 64, 00:09:40.275 "state": "configuring", 00:09:40.275 "raid_level": "concat", 00:09:40.275 "superblock": true, 00:09:40.275 "num_base_bdevs": 3, 00:09:40.275 "num_base_bdevs_discovered": 1, 00:09:40.275 "num_base_bdevs_operational": 3, 00:09:40.275 "base_bdevs_list": [ 00:09:40.275 { 00:09:40.275 "name": "BaseBdev1", 00:09:40.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.275 "is_configured": false, 00:09:40.275 "data_offset": 0, 00:09:40.275 "data_size": 0 00:09:40.275 }, 00:09:40.275 { 00:09:40.275 "name": null, 00:09:40.275 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:40.275 "is_configured": false, 00:09:40.275 "data_offset": 0, 00:09:40.275 "data_size": 63488 00:09:40.275 }, 00:09:40.275 { 00:09:40.275 "name": "BaseBdev3", 00:09:40.275 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:40.275 "is_configured": true, 00:09:40.275 "data_offset": 2048, 00:09:40.275 "data_size": 63488 00:09:40.275 } 00:09:40.275 ] 00:09:40.275 }' 00:09:40.276 16:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.276 16:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.846 [2024-09-28 16:10:55.397898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.846 BaseBdev1 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.846 [ 00:09:40.846 { 00:09:40.846 "name": "BaseBdev1", 00:09:40.846 "aliases": [ 00:09:40.846 "8c1e4baf-f84b-4eac-a8ea-4b3011235362" 00:09:40.846 ], 00:09:40.846 "product_name": "Malloc disk", 00:09:40.846 "block_size": 512, 00:09:40.846 "num_blocks": 65536, 00:09:40.846 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:40.846 "assigned_rate_limits": { 00:09:40.846 "rw_ios_per_sec": 0, 00:09:40.846 "rw_mbytes_per_sec": 0, 00:09:40.846 "r_mbytes_per_sec": 0, 00:09:40.846 "w_mbytes_per_sec": 0 00:09:40.846 }, 00:09:40.846 "claimed": true, 00:09:40.846 "claim_type": "exclusive_write", 00:09:40.846 "zoned": false, 00:09:40.846 "supported_io_types": { 00:09:40.846 "read": true, 00:09:40.846 "write": true, 00:09:40.846 "unmap": true, 00:09:40.846 "flush": true, 00:09:40.846 "reset": true, 00:09:40.846 "nvme_admin": false, 00:09:40.846 "nvme_io": false, 00:09:40.846 "nvme_io_md": false, 00:09:40.846 "write_zeroes": true, 00:09:40.846 "zcopy": true, 00:09:40.846 "get_zone_info": false, 00:09:40.846 "zone_management": false, 00:09:40.846 "zone_append": false, 00:09:40.846 "compare": false, 00:09:40.846 "compare_and_write": false, 00:09:40.846 "abort": true, 00:09:40.846 "seek_hole": false, 00:09:40.846 "seek_data": false, 00:09:40.846 "copy": true, 00:09:40.846 "nvme_iov_md": false 00:09:40.846 }, 00:09:40.846 "memory_domains": [ 00:09:40.846 { 00:09:40.846 "dma_device_id": "system", 00:09:40.846 "dma_device_type": 1 00:09:40.846 }, 00:09:40.846 { 00:09:40.846 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:40.846 "dma_device_type": 2 00:09:40.846 } 00:09:40.846 ], 00:09:40.846 "driver_specific": {} 00:09:40.846 } 00:09:40.846 ] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.846 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.846 "name": "Existed_Raid", 00:09:40.846 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:40.846 "strip_size_kb": 64, 00:09:40.846 "state": "configuring", 00:09:40.846 "raid_level": "concat", 00:09:40.846 "superblock": true, 00:09:40.846 "num_base_bdevs": 3, 00:09:40.846 "num_base_bdevs_discovered": 2, 00:09:40.846 "num_base_bdevs_operational": 3, 00:09:40.846 "base_bdevs_list": [ 00:09:40.846 { 00:09:40.846 "name": "BaseBdev1", 00:09:40.846 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:40.846 "is_configured": true, 00:09:40.846 "data_offset": 2048, 00:09:40.846 "data_size": 63488 00:09:40.846 }, 00:09:40.846 { 00:09:40.846 "name": null, 00:09:40.846 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:40.847 "is_configured": false, 00:09:40.847 "data_offset": 0, 00:09:40.847 "data_size": 63488 00:09:40.847 }, 00:09:40.847 { 00:09:40.847 "name": "BaseBdev3", 00:09:40.847 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:40.847 "is_configured": true, 00:09:40.847 "data_offset": 2048, 00:09:40.847 "data_size": 63488 00:09:40.847 } 00:09:40.847 ] 00:09:40.847 }' 00:09:40.847 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.847 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.417 [2024-09-28 16:10:55.956960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.417 16:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.417 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.417 "name": "Existed_Raid", 00:09:41.417 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:41.417 "strip_size_kb": 64, 00:09:41.417 "state": "configuring", 00:09:41.417 "raid_level": "concat", 00:09:41.417 "superblock": true, 00:09:41.417 "num_base_bdevs": 3, 00:09:41.417 "num_base_bdevs_discovered": 1, 00:09:41.417 "num_base_bdevs_operational": 3, 00:09:41.417 "base_bdevs_list": [ 00:09:41.417 { 00:09:41.417 "name": "BaseBdev1", 00:09:41.417 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:41.417 "is_configured": true, 00:09:41.417 "data_offset": 2048, 00:09:41.417 "data_size": 63488 00:09:41.417 }, 00:09:41.417 { 00:09:41.417 "name": null, 00:09:41.417 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:41.417 "is_configured": false, 00:09:41.417 "data_offset": 0, 00:09:41.417 "data_size": 63488 00:09:41.417 }, 00:09:41.417 { 00:09:41.417 "name": null, 00:09:41.417 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:41.417 "is_configured": false, 00:09:41.417 "data_offset": 0, 00:09:41.417 "data_size": 63488 00:09:41.417 } 00:09:41.417 ] 00:09:41.417 }' 00:09:41.417 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.417 16:10:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.987 [2024-09-28 16:10:56.436175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.987 16:10:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.987 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.987 "name": "Existed_Raid", 00:09:41.987 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:41.987 "strip_size_kb": 64, 00:09:41.987 "state": "configuring", 00:09:41.987 "raid_level": "concat", 00:09:41.987 "superblock": true, 00:09:41.987 "num_base_bdevs": 3, 00:09:41.987 "num_base_bdevs_discovered": 2, 00:09:41.987 "num_base_bdevs_operational": 3, 00:09:41.987 "base_bdevs_list": [ 00:09:41.987 { 00:09:41.987 "name": "BaseBdev1", 00:09:41.987 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:41.987 "is_configured": true, 00:09:41.987 "data_offset": 2048, 00:09:41.987 "data_size": 63488 00:09:41.987 }, 00:09:41.987 { 00:09:41.987 "name": null, 00:09:41.987 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:41.987 "is_configured": 
false, 00:09:41.987 "data_offset": 0, 00:09:41.987 "data_size": 63488 00:09:41.987 }, 00:09:41.987 { 00:09:41.987 "name": "BaseBdev3", 00:09:41.987 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:41.987 "is_configured": true, 00:09:41.987 "data_offset": 2048, 00:09:41.987 "data_size": 63488 00:09:41.987 } 00:09:41.987 ] 00:09:41.988 }' 00:09:41.988 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.988 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.247 16:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.247 [2024-09-28 16:10:56.919417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.508 16:10:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.508 "name": "Existed_Raid", 00:09:42.508 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:42.508 "strip_size_kb": 64, 00:09:42.508 "state": "configuring", 00:09:42.508 "raid_level": "concat", 00:09:42.508 "superblock": true, 00:09:42.508 "num_base_bdevs": 3, 00:09:42.508 
"num_base_bdevs_discovered": 1, 00:09:42.508 "num_base_bdevs_operational": 3, 00:09:42.508 "base_bdevs_list": [ 00:09:42.508 { 00:09:42.508 "name": null, 00:09:42.508 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:42.508 "is_configured": false, 00:09:42.508 "data_offset": 0, 00:09:42.508 "data_size": 63488 00:09:42.508 }, 00:09:42.508 { 00:09:42.508 "name": null, 00:09:42.508 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:42.508 "is_configured": false, 00:09:42.508 "data_offset": 0, 00:09:42.508 "data_size": 63488 00:09:42.508 }, 00:09:42.508 { 00:09:42.508 "name": "BaseBdev3", 00:09:42.508 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:42.508 "is_configured": true, 00:09:42.508 "data_offset": 2048, 00:09:42.508 "data_size": 63488 00:09:42.508 } 00:09:42.508 ] 00:09:42.508 }' 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.508 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.767 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.767 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.767 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.767 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.026 16:10:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.026 [2024-09-28 16:10:57.492283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.026 
16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.026 "name": "Existed_Raid", 00:09:43.026 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:43.026 "strip_size_kb": 64, 00:09:43.026 "state": "configuring", 00:09:43.026 "raid_level": "concat", 00:09:43.026 "superblock": true, 00:09:43.026 "num_base_bdevs": 3, 00:09:43.026 "num_base_bdevs_discovered": 2, 00:09:43.026 "num_base_bdevs_operational": 3, 00:09:43.026 "base_bdevs_list": [ 00:09:43.026 { 00:09:43.026 "name": null, 00:09:43.026 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:43.026 "is_configured": false, 00:09:43.026 "data_offset": 0, 00:09:43.026 "data_size": 63488 00:09:43.026 }, 00:09:43.026 { 00:09:43.026 "name": "BaseBdev2", 00:09:43.026 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:43.026 "is_configured": true, 00:09:43.026 "data_offset": 2048, 00:09:43.026 "data_size": 63488 00:09:43.026 }, 00:09:43.026 { 00:09:43.026 "name": "BaseBdev3", 00:09:43.026 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:43.026 "is_configured": true, 00:09:43.026 "data_offset": 2048, 00:09:43.026 "data_size": 63488 00:09:43.026 } 00:09:43.026 ] 00:09:43.026 }' 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.026 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.286 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.545 16:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.545 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8c1e4baf-f84b-4eac-a8ea-4b3011235362 00:09:43.545 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.545 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.545 [2024-09-28 16:10:58.056376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:43.545 [2024-09-28 16:10:58.056715] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:43.545 [2024-09-28 16:10:58.056775] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:43.545 [2024-09-28 16:10:58.057092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:43.545 [2024-09-28 16:10:58.057284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:43.545 [2024-09-28 16:10:58.057323] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:43.545 NewBaseBdev 00:09:43.545 
[2024-09-28 16:10:58.057539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.545 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.545 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:43.545 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.546 [ 00:09:43.546 { 00:09:43.546 "name": "NewBaseBdev", 00:09:43.546 "aliases": [ 00:09:43.546 "8c1e4baf-f84b-4eac-a8ea-4b3011235362" 00:09:43.546 ], 00:09:43.546 "product_name": "Malloc disk", 00:09:43.546 "block_size": 512, 
00:09:43.546 "num_blocks": 65536, 00:09:43.546 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:43.546 "assigned_rate_limits": { 00:09:43.546 "rw_ios_per_sec": 0, 00:09:43.546 "rw_mbytes_per_sec": 0, 00:09:43.546 "r_mbytes_per_sec": 0, 00:09:43.546 "w_mbytes_per_sec": 0 00:09:43.546 }, 00:09:43.546 "claimed": true, 00:09:43.546 "claim_type": "exclusive_write", 00:09:43.546 "zoned": false, 00:09:43.546 "supported_io_types": { 00:09:43.546 "read": true, 00:09:43.546 "write": true, 00:09:43.546 "unmap": true, 00:09:43.546 "flush": true, 00:09:43.546 "reset": true, 00:09:43.546 "nvme_admin": false, 00:09:43.546 "nvme_io": false, 00:09:43.546 "nvme_io_md": false, 00:09:43.546 "write_zeroes": true, 00:09:43.546 "zcopy": true, 00:09:43.546 "get_zone_info": false, 00:09:43.546 "zone_management": false, 00:09:43.546 "zone_append": false, 00:09:43.546 "compare": false, 00:09:43.546 "compare_and_write": false, 00:09:43.546 "abort": true, 00:09:43.546 "seek_hole": false, 00:09:43.546 "seek_data": false, 00:09:43.546 "copy": true, 00:09:43.546 "nvme_iov_md": false 00:09:43.546 }, 00:09:43.546 "memory_domains": [ 00:09:43.546 { 00:09:43.546 "dma_device_id": "system", 00:09:43.546 "dma_device_type": 1 00:09:43.546 }, 00:09:43.546 { 00:09:43.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.546 "dma_device_type": 2 00:09:43.546 } 00:09:43.546 ], 00:09:43.546 "driver_specific": {} 00:09:43.546 } 00:09:43.546 ] 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.546 "name": "Existed_Raid", 00:09:43.546 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:43.546 "strip_size_kb": 64, 00:09:43.546 "state": "online", 00:09:43.546 "raid_level": "concat", 00:09:43.546 "superblock": true, 00:09:43.546 "num_base_bdevs": 3, 00:09:43.546 "num_base_bdevs_discovered": 3, 00:09:43.546 "num_base_bdevs_operational": 3, 00:09:43.546 "base_bdevs_list": [ 00:09:43.546 { 00:09:43.546 "name": "NewBaseBdev", 00:09:43.546 "uuid": 
"8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:43.546 "is_configured": true, 00:09:43.546 "data_offset": 2048, 00:09:43.546 "data_size": 63488 00:09:43.546 }, 00:09:43.546 { 00:09:43.546 "name": "BaseBdev2", 00:09:43.546 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:43.546 "is_configured": true, 00:09:43.546 "data_offset": 2048, 00:09:43.546 "data_size": 63488 00:09:43.546 }, 00:09:43.546 { 00:09:43.546 "name": "BaseBdev3", 00:09:43.546 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:43.546 "is_configured": true, 00:09:43.546 "data_offset": 2048, 00:09:43.546 "data_size": 63488 00:09:43.546 } 00:09:43.546 ] 00:09:43.546 }' 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.546 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:44.115 [2024-09-28 16:10:58.539858] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.115 "name": "Existed_Raid", 00:09:44.115 "aliases": [ 00:09:44.115 "3afa87fc-f448-48ca-ad2d-fa4d1dd93081" 00:09:44.115 ], 00:09:44.115 "product_name": "Raid Volume", 00:09:44.115 "block_size": 512, 00:09:44.115 "num_blocks": 190464, 00:09:44.115 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:44.115 "assigned_rate_limits": { 00:09:44.115 "rw_ios_per_sec": 0, 00:09:44.115 "rw_mbytes_per_sec": 0, 00:09:44.115 "r_mbytes_per_sec": 0, 00:09:44.115 "w_mbytes_per_sec": 0 00:09:44.115 }, 00:09:44.115 "claimed": false, 00:09:44.115 "zoned": false, 00:09:44.115 "supported_io_types": { 00:09:44.115 "read": true, 00:09:44.115 "write": true, 00:09:44.115 "unmap": true, 00:09:44.115 "flush": true, 00:09:44.115 "reset": true, 00:09:44.115 "nvme_admin": false, 00:09:44.115 "nvme_io": false, 00:09:44.115 "nvme_io_md": false, 00:09:44.115 "write_zeroes": true, 00:09:44.115 "zcopy": false, 00:09:44.115 "get_zone_info": false, 00:09:44.115 "zone_management": false, 00:09:44.115 "zone_append": false, 00:09:44.115 "compare": false, 00:09:44.115 "compare_and_write": false, 00:09:44.115 "abort": false, 00:09:44.115 "seek_hole": false, 00:09:44.115 "seek_data": false, 00:09:44.115 "copy": false, 00:09:44.115 "nvme_iov_md": false 00:09:44.115 }, 00:09:44.115 "memory_domains": [ 00:09:44.115 { 00:09:44.115 "dma_device_id": "system", 00:09:44.115 "dma_device_type": 1 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.115 "dma_device_type": 2 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "dma_device_id": "system", 00:09:44.115 "dma_device_type": 1 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.115 "dma_device_type": 2 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "dma_device_id": "system", 00:09:44.115 "dma_device_type": 1 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.115 "dma_device_type": 2 00:09:44.115 } 00:09:44.115 ], 00:09:44.115 "driver_specific": { 00:09:44.115 "raid": { 00:09:44.115 "uuid": "3afa87fc-f448-48ca-ad2d-fa4d1dd93081", 00:09:44.115 "strip_size_kb": 64, 00:09:44.115 "state": "online", 00:09:44.115 "raid_level": "concat", 00:09:44.115 "superblock": true, 00:09:44.115 "num_base_bdevs": 3, 00:09:44.115 "num_base_bdevs_discovered": 3, 00:09:44.115 "num_base_bdevs_operational": 3, 00:09:44.115 "base_bdevs_list": [ 00:09:44.115 { 00:09:44.115 "name": "NewBaseBdev", 00:09:44.115 "uuid": "8c1e4baf-f84b-4eac-a8ea-4b3011235362", 00:09:44.115 "is_configured": true, 00:09:44.115 "data_offset": 2048, 00:09:44.115 "data_size": 63488 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "name": "BaseBdev2", 00:09:44.115 "uuid": "6787f73a-b2af-4991-a25b-161973bebdaa", 00:09:44.115 "is_configured": true, 00:09:44.115 "data_offset": 2048, 00:09:44.115 "data_size": 63488 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "name": "BaseBdev3", 00:09:44.115 "uuid": "88ef8672-acb0-43af-88f5-d0422635c49d", 00:09:44.115 "is_configured": true, 00:09:44.115 "data_offset": 2048, 00:09:44.115 "data_size": 63488 00:09:44.115 } 00:09:44.115 ] 00:09:44.115 } 00:09:44.115 } 00:09:44.115 }' 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:44.115 BaseBdev2 00:09:44.115 BaseBdev3' 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.115 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.116 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.375 [2024-09-28 16:10:58.819152] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.375 [2024-09-28 16:10:58.819177] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.375 [2024-09-28 16:10:58.819271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.375 [2024-09-28 16:10:58.819324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.375 [2024-09-28 16:10:58.819391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66247 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66247 ']' 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66247 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66247 00:09:44.375 killing process with pid 66247 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66247' 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66247 00:09:44.375 [2024-09-28 16:10:58.869013] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.375 16:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66247 00:09:44.634 [2024-09-28 16:10:59.183825] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.026 16:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.026 00:09:46.026 real 0m10.918s 00:09:46.026 user 0m17.045s 00:09:46.026 sys 0m2.029s 00:09:46.026 16:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:46.026 16:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.026 ************************************ 00:09:46.026 END TEST raid_state_function_test_sb 00:09:46.026 ************************************ 00:09:46.026 16:11:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:46.026 16:11:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:46.026 16:11:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.026 16:11:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.026 ************************************ 00:09:46.026 START TEST raid_superblock_test 00:09:46.026 ************************************ 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.026 16:11:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:46.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66868 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66868 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66868 ']' 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.026 16:11:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.026 [2024-09-28 16:11:00.676780] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:46.026 [2024-09-28 16:11:00.676973] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66868 ] 00:09:46.301 [2024-09-28 16:11:00.841779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.560 [2024-09-28 16:11:01.088652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.819 [2024-09-28 16:11:01.312334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.819 [2024-09-28 16:11:01.312369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:46.819 
16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.819 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.079 malloc1 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.079 [2024-09-28 16:11:01.551348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.079 [2024-09-28 16:11:01.551474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.079 [2024-09-28 16:11:01.551519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.079 [2024-09-28 16:11:01.551552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.079 [2024-09-28 16:11:01.553916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.079 [2024-09-28 16:11:01.553982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.079 pt1 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.079 malloc2 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.079 [2024-09-28 16:11:01.621912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.079 [2024-09-28 16:11:01.622008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.079 [2024-09-28 16:11:01.622053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.079 [2024-09-28 16:11:01.622062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.079 [2024-09-28 16:11:01.624444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.079 [2024-09-28 16:11:01.624476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.079 
pt2 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.079 malloc3 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.079 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.079 [2024-09-28 16:11:01.683529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.079 [2024-09-28 16:11:01.683621] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.079 [2024-09-28 16:11:01.683660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:47.079 [2024-09-28 16:11:01.683693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.080 [2024-09-28 16:11:01.685998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.080 [2024-09-28 16:11:01.686067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.080 pt3 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.080 [2024-09-28 16:11:01.695588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.080 [2024-09-28 16:11:01.697694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.080 [2024-09-28 16:11:01.697813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.080 [2024-09-28 16:11:01.698009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:47.080 [2024-09-28 16:11:01.698058] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:47.080 [2024-09-28 16:11:01.698326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:47.080 [2024-09-28 16:11:01.698541] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:47.080 [2024-09-28 16:11:01.698583] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:47.080 [2024-09-28 16:11:01.698770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.080 16:11:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.080 "name": "raid_bdev1", 00:09:47.080 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:47.080 "strip_size_kb": 64, 00:09:47.080 "state": "online", 00:09:47.080 "raid_level": "concat", 00:09:47.080 "superblock": true, 00:09:47.080 "num_base_bdevs": 3, 00:09:47.080 "num_base_bdevs_discovered": 3, 00:09:47.080 "num_base_bdevs_operational": 3, 00:09:47.080 "base_bdevs_list": [ 00:09:47.080 { 00:09:47.080 "name": "pt1", 00:09:47.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.080 "is_configured": true, 00:09:47.080 "data_offset": 2048, 00:09:47.080 "data_size": 63488 00:09:47.080 }, 00:09:47.080 { 00:09:47.080 "name": "pt2", 00:09:47.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.080 "is_configured": true, 00:09:47.080 "data_offset": 2048, 00:09:47.080 "data_size": 63488 00:09:47.080 }, 00:09:47.080 { 00:09:47.080 "name": "pt3", 00:09:47.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.080 "is_configured": true, 00:09:47.080 "data_offset": 2048, 00:09:47.080 "data_size": 63488 00:09:47.080 } 00:09:47.080 ] 00:09:47.080 }' 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.080 16:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.649 [2024-09-28 16:11:02.091138] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.649 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.649 "name": "raid_bdev1", 00:09:47.649 "aliases": [ 00:09:47.649 "6803915a-7588-45e6-aace-f645cba72d1d" 00:09:47.649 ], 00:09:47.649 "product_name": "Raid Volume", 00:09:47.649 "block_size": 512, 00:09:47.649 "num_blocks": 190464, 00:09:47.649 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:47.649 "assigned_rate_limits": { 00:09:47.649 "rw_ios_per_sec": 0, 00:09:47.649 "rw_mbytes_per_sec": 0, 00:09:47.649 "r_mbytes_per_sec": 0, 00:09:47.649 "w_mbytes_per_sec": 0 00:09:47.649 }, 00:09:47.649 "claimed": false, 00:09:47.649 "zoned": false, 00:09:47.649 "supported_io_types": { 00:09:47.649 "read": true, 00:09:47.649 "write": true, 00:09:47.649 "unmap": true, 00:09:47.649 "flush": true, 00:09:47.649 "reset": true, 00:09:47.649 "nvme_admin": false, 00:09:47.649 "nvme_io": false, 00:09:47.649 "nvme_io_md": false, 00:09:47.649 "write_zeroes": true, 00:09:47.649 "zcopy": false, 00:09:47.649 "get_zone_info": false, 00:09:47.649 "zone_management": false, 00:09:47.649 "zone_append": false, 00:09:47.649 "compare": 
false, 00:09:47.649 "compare_and_write": false, 00:09:47.649 "abort": false, 00:09:47.649 "seek_hole": false, 00:09:47.649 "seek_data": false, 00:09:47.649 "copy": false, 00:09:47.649 "nvme_iov_md": false 00:09:47.649 }, 00:09:47.649 "memory_domains": [ 00:09:47.649 { 00:09:47.649 "dma_device_id": "system", 00:09:47.649 "dma_device_type": 1 00:09:47.649 }, 00:09:47.649 { 00:09:47.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.649 "dma_device_type": 2 00:09:47.649 }, 00:09:47.649 { 00:09:47.649 "dma_device_id": "system", 00:09:47.650 "dma_device_type": 1 00:09:47.650 }, 00:09:47.650 { 00:09:47.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.650 "dma_device_type": 2 00:09:47.650 }, 00:09:47.650 { 00:09:47.650 "dma_device_id": "system", 00:09:47.650 "dma_device_type": 1 00:09:47.650 }, 00:09:47.650 { 00:09:47.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.650 "dma_device_type": 2 00:09:47.650 } 00:09:47.650 ], 00:09:47.650 "driver_specific": { 00:09:47.650 "raid": { 00:09:47.650 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:47.650 "strip_size_kb": 64, 00:09:47.650 "state": "online", 00:09:47.650 "raid_level": "concat", 00:09:47.650 "superblock": true, 00:09:47.650 "num_base_bdevs": 3, 00:09:47.650 "num_base_bdevs_discovered": 3, 00:09:47.650 "num_base_bdevs_operational": 3, 00:09:47.650 "base_bdevs_list": [ 00:09:47.650 { 00:09:47.650 "name": "pt1", 00:09:47.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.650 "is_configured": true, 00:09:47.650 "data_offset": 2048, 00:09:47.650 "data_size": 63488 00:09:47.650 }, 00:09:47.650 { 00:09:47.650 "name": "pt2", 00:09:47.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.650 "is_configured": true, 00:09:47.650 "data_offset": 2048, 00:09:47.650 "data_size": 63488 00:09:47.650 }, 00:09:47.650 { 00:09:47.650 "name": "pt3", 00:09:47.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.650 "is_configured": true, 00:09:47.650 "data_offset": 2048, 00:09:47.650 
"data_size": 63488 00:09:47.650 } 00:09:47.650 ] 00:09:47.650 } 00:09:47.650 } 00:09:47.650 }' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.650 pt2 00:09:47.650 pt3' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.650 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.910 [2024-09-28 16:11:02.370594] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6803915a-7588-45e6-aace-f645cba72d1d 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6803915a-7588-45e6-aace-f645cba72d1d ']' 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.910 [2024-09-28 16:11:02.406300] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.910 [2024-09-28 16:11:02.406338] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.910 [2024-09-28 16:11:02.406402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.910 [2024-09-28 16:11:02.406463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.910 [2024-09-28 16:11:02.406475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.910 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 16:11:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 [2024-09-28 16:11:02.554076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:47.911 [2024-09-28 16:11:02.556206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:47.911 [2024-09-28 16:11:02.556274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:47.911 [2024-09-28 16:11:02.556321] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:47.911 [2024-09-28 16:11:02.556370] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:47.911 [2024-09-28 16:11:02.556389] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:47.911 [2024-09-28 16:11:02.556420] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.911 [2024-09-28 16:11:02.556429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:47.911 request: 00:09:47.911 { 00:09:47.911 "name": "raid_bdev1", 00:09:47.911 "raid_level": "concat", 00:09:47.911 "base_bdevs": [ 00:09:47.911 "malloc1", 00:09:47.911 "malloc2", 00:09:47.911 "malloc3" 00:09:47.911 ], 00:09:47.911 "strip_size_kb": 64, 00:09:47.911 "superblock": false, 00:09:47.911 "method": "bdev_raid_create", 00:09:47.911 "req_id": 1 00:09:47.911 } 00:09:47.911 Got JSON-RPC error response 00:09:47.911 response: 00:09:47.911 { 00:09:47.911 "code": -17, 00:09:47.911 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:47.911 } 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 [2024-09-28 16:11:02.621922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.171 [2024-09-28 16:11:02.622007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.171 [2024-09-28 16:11:02.622043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:48.171 [2024-09-28 16:11:02.622070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.171 [2024-09-28 16:11:02.624485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.171 [2024-09-28 16:11:02.624553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.171 [2024-09-28 16:11:02.624643] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.171 [2024-09-28 16:11:02.624734] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.171 pt1 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.171 "name": "raid_bdev1", 
00:09:48.171 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:48.171 "strip_size_kb": 64, 00:09:48.171 "state": "configuring", 00:09:48.171 "raid_level": "concat", 00:09:48.171 "superblock": true, 00:09:48.171 "num_base_bdevs": 3, 00:09:48.171 "num_base_bdevs_discovered": 1, 00:09:48.171 "num_base_bdevs_operational": 3, 00:09:48.171 "base_bdevs_list": [ 00:09:48.171 { 00:09:48.171 "name": "pt1", 00:09:48.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.171 "is_configured": true, 00:09:48.171 "data_offset": 2048, 00:09:48.171 "data_size": 63488 00:09:48.171 }, 00:09:48.171 { 00:09:48.171 "name": null, 00:09:48.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.171 "is_configured": false, 00:09:48.171 "data_offset": 2048, 00:09:48.171 "data_size": 63488 00:09:48.171 }, 00:09:48.171 { 00:09:48.171 "name": null, 00:09:48.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.171 "is_configured": false, 00:09:48.171 "data_offset": 2048, 00:09:48.171 "data_size": 63488 00:09:48.171 } 00:09:48.171 ] 00:09:48.171 }' 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.171 16:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.431 [2024-09-28 16:11:03.061149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.431 [2024-09-28 16:11:03.061203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.431 [2024-09-28 16:11:03.061238] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:48.431 [2024-09-28 16:11:03.061248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.431 [2024-09-28 16:11:03.061655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.431 [2024-09-28 16:11:03.061677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.431 [2024-09-28 16:11:03.061747] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.431 [2024-09-28 16:11:03.061766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.431 pt2 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.431 [2024-09-28 16:11:03.073157] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.431 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.691 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.691 "name": "raid_bdev1", 00:09:48.691 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:48.691 "strip_size_kb": 64, 00:09:48.691 "state": "configuring", 00:09:48.691 "raid_level": "concat", 00:09:48.691 "superblock": true, 00:09:48.691 "num_base_bdevs": 3, 00:09:48.691 "num_base_bdevs_discovered": 1, 00:09:48.691 "num_base_bdevs_operational": 3, 00:09:48.691 "base_bdevs_list": [ 00:09:48.691 { 00:09:48.691 "name": "pt1", 00:09:48.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.691 "is_configured": true, 00:09:48.691 "data_offset": 2048, 00:09:48.691 "data_size": 63488 00:09:48.691 }, 00:09:48.691 { 00:09:48.691 "name": null, 00:09:48.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.691 "is_configured": false, 00:09:48.691 "data_offset": 0, 00:09:48.691 "data_size": 63488 00:09:48.691 }, 00:09:48.691 { 00:09:48.691 "name": null, 00:09:48.691 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.691 "is_configured": false, 00:09:48.691 "data_offset": 2048, 00:09:48.691 "data_size": 63488 00:09:48.691 } 00:09:48.691 ] 00:09:48.691 }' 00:09:48.691 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.691 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.951 [2024-09-28 16:11:03.536341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.951 [2024-09-28 16:11:03.536463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.951 [2024-09-28 16:11:03.536498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:48.951 [2024-09-28 16:11:03.536529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.951 [2024-09-28 16:11:03.536989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.951 [2024-09-28 16:11:03.537046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.951 [2024-09-28 16:11:03.537151] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.951 [2024-09-28 16:11:03.537219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.951 pt2 00:09:48.951 16:11:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.951 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.952 [2024-09-28 16:11:03.548338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.952 [2024-09-28 16:11:03.548431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.952 [2024-09-28 16:11:03.548460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:48.952 [2024-09-28 16:11:03.548488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.952 [2024-09-28 16:11:03.548899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.952 [2024-09-28 16:11:03.548959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.952 [2024-09-28 16:11:03.549044] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:48.952 [2024-09-28 16:11:03.549091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.952 [2024-09-28 16:11:03.549245] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:48.952 [2024-09-28 16:11:03.549262] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:48.952 [2024-09-28 16:11:03.549530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:48.952 [2024-09-28 16:11:03.549679] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:48.952 [2024-09-28 16:11:03.549688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:48.952 [2024-09-28 16:11:03.549813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.952 pt3 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.952 16:11:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.952 "name": "raid_bdev1", 00:09:48.952 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:48.952 "strip_size_kb": 64, 00:09:48.952 "state": "online", 00:09:48.952 "raid_level": "concat", 00:09:48.952 "superblock": true, 00:09:48.952 "num_base_bdevs": 3, 00:09:48.952 "num_base_bdevs_discovered": 3, 00:09:48.952 "num_base_bdevs_operational": 3, 00:09:48.952 "base_bdevs_list": [ 00:09:48.952 { 00:09:48.952 "name": "pt1", 00:09:48.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.952 "is_configured": true, 00:09:48.952 "data_offset": 2048, 00:09:48.952 "data_size": 63488 00:09:48.952 }, 00:09:48.952 { 00:09:48.952 "name": "pt2", 00:09:48.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.952 "is_configured": true, 00:09:48.952 "data_offset": 2048, 00:09:48.952 "data_size": 63488 00:09:48.952 }, 00:09:48.952 { 00:09:48.952 "name": "pt3", 00:09:48.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.952 "is_configured": true, 00:09:48.952 "data_offset": 2048, 00:09:48.952 "data_size": 63488 00:09:48.952 } 00:09:48.952 ] 00:09:48.952 }' 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.952 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.521 16:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.521 [2024-09-28 16:11:03.987808] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.521 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.521 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.521 "name": "raid_bdev1", 00:09:49.521 "aliases": [ 00:09:49.521 "6803915a-7588-45e6-aace-f645cba72d1d" 00:09:49.521 ], 00:09:49.521 "product_name": "Raid Volume", 00:09:49.521 "block_size": 512, 00:09:49.521 "num_blocks": 190464, 00:09:49.521 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:49.521 "assigned_rate_limits": { 00:09:49.521 "rw_ios_per_sec": 0, 00:09:49.521 "rw_mbytes_per_sec": 0, 00:09:49.521 "r_mbytes_per_sec": 0, 00:09:49.521 "w_mbytes_per_sec": 0 00:09:49.521 }, 00:09:49.521 "claimed": false, 00:09:49.521 "zoned": false, 00:09:49.521 "supported_io_types": { 00:09:49.521 "read": true, 00:09:49.521 "write": true, 00:09:49.521 "unmap": true, 00:09:49.522 "flush": true, 00:09:49.522 "reset": true, 00:09:49.522 "nvme_admin": false, 00:09:49.522 "nvme_io": false, 
00:09:49.522 "nvme_io_md": false, 00:09:49.522 "write_zeroes": true, 00:09:49.522 "zcopy": false, 00:09:49.522 "get_zone_info": false, 00:09:49.522 "zone_management": false, 00:09:49.522 "zone_append": false, 00:09:49.522 "compare": false, 00:09:49.522 "compare_and_write": false, 00:09:49.522 "abort": false, 00:09:49.522 "seek_hole": false, 00:09:49.522 "seek_data": false, 00:09:49.522 "copy": false, 00:09:49.522 "nvme_iov_md": false 00:09:49.522 }, 00:09:49.522 "memory_domains": [ 00:09:49.522 { 00:09:49.522 "dma_device_id": "system", 00:09:49.522 "dma_device_type": 1 00:09:49.522 }, 00:09:49.522 { 00:09:49.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.522 "dma_device_type": 2 00:09:49.522 }, 00:09:49.522 { 00:09:49.522 "dma_device_id": "system", 00:09:49.522 "dma_device_type": 1 00:09:49.522 }, 00:09:49.522 { 00:09:49.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.522 "dma_device_type": 2 00:09:49.522 }, 00:09:49.522 { 00:09:49.522 "dma_device_id": "system", 00:09:49.522 "dma_device_type": 1 00:09:49.522 }, 00:09:49.522 { 00:09:49.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.522 "dma_device_type": 2 00:09:49.522 } 00:09:49.522 ], 00:09:49.522 "driver_specific": { 00:09:49.522 "raid": { 00:09:49.522 "uuid": "6803915a-7588-45e6-aace-f645cba72d1d", 00:09:49.522 "strip_size_kb": 64, 00:09:49.522 "state": "online", 00:09:49.522 "raid_level": "concat", 00:09:49.522 "superblock": true, 00:09:49.522 "num_base_bdevs": 3, 00:09:49.522 "num_base_bdevs_discovered": 3, 00:09:49.522 "num_base_bdevs_operational": 3, 00:09:49.522 "base_bdevs_list": [ 00:09:49.522 { 00:09:49.522 "name": "pt1", 00:09:49.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.522 "is_configured": true, 00:09:49.522 "data_offset": 2048, 00:09:49.522 "data_size": 63488 00:09:49.522 }, 00:09:49.522 { 00:09:49.522 "name": "pt2", 00:09:49.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.522 "is_configured": true, 00:09:49.522 "data_offset": 2048, 00:09:49.522 
"data_size": 63488 00:09:49.522 }, 00:09:49.522 { 00:09:49.522 "name": "pt3", 00:09:49.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.522 "is_configured": true, 00:09:49.522 "data_offset": 2048, 00:09:49.522 "data_size": 63488 00:09:49.522 } 00:09:49.522 ] 00:09:49.522 } 00:09:49.522 } 00:09:49.522 }' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.522 pt2 00:09:49.522 pt3' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.522 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:49.782 [2024-09-28 16:11:04.283304] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6803915a-7588-45e6-aace-f645cba72d1d '!=' 6803915a-7588-45e6-aace-f645cba72d1d ']' 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66868 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66868 ']' 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66868 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66868 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66868' 00:09:49.782 killing process with pid 66868 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66868 00:09:49.782 [2024-09-28 16:11:04.372863] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:49.782 [2024-09-28 16:11:04.372996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.782 [2024-09-28 16:11:04.373080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.782 16:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66868 00:09:49.782 [2024-09-28 16:11:04.373129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:50.042 [2024-09-28 16:11:04.691759] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.424 16:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:51.424 00:09:51.424 real 0m5.418s 00:09:51.424 user 0m7.523s 00:09:51.424 sys 0m1.032s 00:09:51.424 16:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.424 16:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.424 ************************************ 00:09:51.424 END TEST raid_superblock_test 00:09:51.424 ************************************ 00:09:51.424 16:11:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:51.424 16:11:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.424 16:11:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.424 16:11:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.424 ************************************ 00:09:51.424 START TEST raid_read_error_test 00:09:51.424 ************************************ 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.424 16:11:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L1x3m0sjxj 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67127 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67127 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67127 ']' 00:09:51.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.424 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.686 [2024-09-28 16:11:06.185746] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:51.686 [2024-09-28 16:11:06.185935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67127 ] 00:09:51.686 [2024-09-28 16:11:06.350259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.945 [2024-09-28 16:11:06.590202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.203 [2024-09-28 16:11:06.819915] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.203 [2024-09-28 16:11:06.820031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.462 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.462 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.462 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.462 16:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.463 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.463 16:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 BaseBdev1_malloc 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 true 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 [2024-09-28 16:11:07.064505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.463 [2024-09-28 16:11:07.064567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.463 [2024-09-28 16:11:07.064584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.463 [2024-09-28 16:11:07.064595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.463 [2024-09-28 16:11:07.066907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.463 [2024-09-28 16:11:07.066959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.463 BaseBdev1 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 BaseBdev2_malloc 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 true 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.463 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 [2024-09-28 16:11:07.147100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.721 [2024-09-28 16:11:07.147154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.721 [2024-09-28 16:11:07.147186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.722 [2024-09-28 16:11:07.147197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.722 [2024-09-28 16:11:07.149557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.722 [2024-09-28 16:11:07.149594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.722 BaseBdev2 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 BaseBdev3_malloc 00:09:52.722 16:11:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 true 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 [2024-09-28 16:11:07.220860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.722 [2024-09-28 16:11:07.220913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.722 [2024-09-28 16:11:07.220930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.722 [2024-09-28 16:11:07.220941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.722 [2024-09-28 16:11:07.223243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.722 [2024-09-28 16:11:07.223354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:52.722 BaseBdev3 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 [2024-09-28 16:11:07.232918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.722 [2024-09-28 16:11:07.234969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.722 [2024-09-28 16:11:07.235070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.722 [2024-09-28 16:11:07.235275] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.722 [2024-09-28 16:11:07.235287] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:52.722 [2024-09-28 16:11:07.235534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.722 [2024-09-28 16:11:07.235711] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.722 [2024-09-28 16:11:07.235724] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:52.722 [2024-09-28 16:11:07.235868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.722 16:11:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.722 "name": "raid_bdev1", 00:09:52.722 "uuid": "da08c69c-c9b2-4b86-872a-8743d2b65d07", 00:09:52.722 "strip_size_kb": 64, 00:09:52.722 "state": "online", 00:09:52.722 "raid_level": "concat", 00:09:52.722 "superblock": true, 00:09:52.722 "num_base_bdevs": 3, 00:09:52.722 "num_base_bdevs_discovered": 3, 00:09:52.722 "num_base_bdevs_operational": 3, 00:09:52.722 "base_bdevs_list": [ 00:09:52.722 { 00:09:52.722 "name": "BaseBdev1", 00:09:52.722 "uuid": "7fa5772f-dcfc-5143-a002-f3f797fd1129", 00:09:52.722 "is_configured": true, 00:09:52.722 "data_offset": 2048, 00:09:52.722 "data_size": 63488 00:09:52.722 }, 00:09:52.722 { 00:09:52.722 "name": "BaseBdev2", 00:09:52.722 "uuid": "e47880a2-5274-5719-9eff-99936da335a7", 00:09:52.722 "is_configured": true, 00:09:52.722 "data_offset": 2048, 00:09:52.722 "data_size": 63488 
00:09:52.722 }, 00:09:52.722 { 00:09:52.722 "name": "BaseBdev3", 00:09:52.722 "uuid": "79c8150b-ea98-5740-89f0-523e8d9ea3d0", 00:09:52.722 "is_configured": true, 00:09:52.722 "data_offset": 2048, 00:09:52.722 "data_size": 63488 00:09:52.722 } 00:09:52.722 ] 00:09:52.722 }' 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.722 16:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.290 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.290 16:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.290 [2024-09-28 16:11:07.797292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.229 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.230 16:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.230 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.230 16:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.230 16:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.230 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.230 "name": "raid_bdev1", 00:09:54.230 "uuid": "da08c69c-c9b2-4b86-872a-8743d2b65d07", 00:09:54.230 "strip_size_kb": 64, 00:09:54.230 "state": "online", 00:09:54.230 "raid_level": "concat", 00:09:54.230 "superblock": true, 00:09:54.230 "num_base_bdevs": 3, 00:09:54.230 "num_base_bdevs_discovered": 3, 00:09:54.230 "num_base_bdevs_operational": 3, 00:09:54.230 "base_bdevs_list": [ 00:09:54.230 { 00:09:54.230 "name": "BaseBdev1", 00:09:54.230 "uuid": "7fa5772f-dcfc-5143-a002-f3f797fd1129", 00:09:54.230 "is_configured": true, 00:09:54.230 "data_offset": 2048, 00:09:54.230 "data_size": 63488 
00:09:54.230 }, 00:09:54.230 { 00:09:54.230 "name": "BaseBdev2", 00:09:54.230 "uuid": "e47880a2-5274-5719-9eff-99936da335a7", 00:09:54.230 "is_configured": true, 00:09:54.230 "data_offset": 2048, 00:09:54.230 "data_size": 63488 00:09:54.230 }, 00:09:54.230 { 00:09:54.230 "name": "BaseBdev3", 00:09:54.230 "uuid": "79c8150b-ea98-5740-89f0-523e8d9ea3d0", 00:09:54.230 "is_configured": true, 00:09:54.230 "data_offset": 2048, 00:09:54.230 "data_size": 63488 00:09:54.230 } 00:09:54.230 ] 00:09:54.230 }' 00:09:54.230 16:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.230 16:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.799 [2024-09-28 16:11:09.206523] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.799 [2024-09-28 16:11:09.206623] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.799 [2024-09-28 16:11:09.209298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.799 [2024-09-28 16:11:09.209347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.799 [2024-09-28 16:11:09.209388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.799 [2024-09-28 16:11:09.209398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:54.799 { 00:09:54.799 "results": [ 00:09:54.799 { 00:09:54.799 "job": "raid_bdev1", 00:09:54.799 "core_mask": "0x1", 00:09:54.799 "workload": "randrw", 00:09:54.799 "percentage": 50, 
00:09:54.799 "status": "finished", 00:09:54.799 "queue_depth": 1, 00:09:54.799 "io_size": 131072, 00:09:54.799 "runtime": 1.409984, 00:09:54.799 "iops": 14603.711815169534, 00:09:54.799 "mibps": 1825.4639768961918, 00:09:54.799 "io_failed": 1, 00:09:54.799 "io_timeout": 0, 00:09:54.799 "avg_latency_us": 96.37099785222055, 00:09:54.799 "min_latency_us": 24.593886462882097, 00:09:54.799 "max_latency_us": 1380.8349344978167 00:09:54.799 } 00:09:54.799 ], 00:09:54.799 "core_count": 1 00:09:54.799 } 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67127 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67127 ']' 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67127 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67127 00:09:54.799 killing process with pid 67127 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67127' 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67127 00:09:54.799 [2024-09-28 16:11:09.252801] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.799 16:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67127 00:09:55.059 [2024-09-28 
16:11:09.486350] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L1x3m0sjxj 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:56.441 00:09:56.441 real 0m4.793s 00:09:56.441 user 0m5.564s 00:09:56.441 sys 0m0.673s 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.441 ************************************ 00:09:56.441 END TEST raid_read_error_test 00:09:56.441 ************************************ 00:09:56.441 16:11:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.441 16:11:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:56.441 16:11:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:56.441 16:11:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.441 16:11:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.441 ************************************ 00:09:56.441 START TEST raid_write_error_test 00:09:56.441 ************************************ 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:56.441 16:11:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.441 16:11:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8ZgGSrPrpw 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67278 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67278 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67278 ']' 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.441 16:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.441 [2024-09-28 16:11:11.055326] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:56.441 [2024-09-28 16:11:11.055523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67278 ] 00:09:56.701 [2024-09-28 16:11:11.220845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.961 [2024-09-28 16:11:11.446827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.221 [2024-09-28 16:11:11.655622] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.221 [2024-09-28 16:11:11.655795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.221 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.221 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:57.221 16:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.221 16:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.221 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.221 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 BaseBdev1_malloc 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 true 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 [2024-09-28 16:11:11.935136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.481 [2024-09-28 16:11:11.935265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.481 [2024-09-28 16:11:11.935301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:57.481 [2024-09-28 16:11:11.935333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.481 [2024-09-28 16:11:11.937852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.481 [2024-09-28 16:11:11.937929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.481 BaseBdev1 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.481 BaseBdev2_malloc 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 true 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 [2024-09-28 16:11:12.024094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.481 [2024-09-28 16:11:12.024150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.481 [2024-09-28 16:11:12.024167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:57.481 [2024-09-28 16:11:12.024178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.481 [2024-09-28 16:11:12.026540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.481 [2024-09-28 16:11:12.026613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.481 BaseBdev2 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.481 16:11:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 BaseBdev3_malloc 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 true 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 [2024-09-28 16:11:12.097576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:57.481 [2024-09-28 16:11:12.097629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.481 [2024-09-28 16:11:12.097645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:57.481 [2024-09-28 16:11:12.097655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.481 [2024-09-28 16:11:12.100060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.481 [2024-09-28 16:11:12.100151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:57.481 BaseBdev3 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 [2024-09-28 16:11:12.109649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.481 [2024-09-28 16:11:12.111734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.481 [2024-09-28 16:11:12.111812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.481 [2024-09-28 16:11:12.112005] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:57.481 [2024-09-28 16:11:12.112018] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:57.481 [2024-09-28 16:11:12.112294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:57.481 [2024-09-28 16:11:12.112441] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.481 [2024-09-28 16:11:12.112452] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:57.481 [2024-09-28 16:11:12.112585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.481 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.482 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.740 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.740 "name": "raid_bdev1", 00:09:57.740 "uuid": "2e956315-aa6f-46db-8363-0e52216bba5b", 00:09:57.740 "strip_size_kb": 64, 00:09:57.740 "state": "online", 00:09:57.740 "raid_level": "concat", 00:09:57.740 "superblock": true, 00:09:57.740 "num_base_bdevs": 3, 00:09:57.740 "num_base_bdevs_discovered": 3, 00:09:57.740 "num_base_bdevs_operational": 3, 00:09:57.740 "base_bdevs_list": [ 00:09:57.740 { 00:09:57.740 
"name": "BaseBdev1", 00:09:57.740 "uuid": "909faf0d-d402-5d3b-be12-12ecde181f00", 00:09:57.740 "is_configured": true, 00:09:57.740 "data_offset": 2048, 00:09:57.740 "data_size": 63488 00:09:57.740 }, 00:09:57.740 { 00:09:57.740 "name": "BaseBdev2", 00:09:57.740 "uuid": "02162eb7-aa37-5899-a24e-dd2416359f63", 00:09:57.740 "is_configured": true, 00:09:57.740 "data_offset": 2048, 00:09:57.740 "data_size": 63488 00:09:57.740 }, 00:09:57.740 { 00:09:57.740 "name": "BaseBdev3", 00:09:57.740 "uuid": "3bc2596f-f335-51bc-ad88-e113840db0aa", 00:09:57.740 "is_configured": true, 00:09:57.740 "data_offset": 2048, 00:09:57.740 "data_size": 63488 00:09:57.740 } 00:09:57.740 ] 00:09:57.740 }' 00:09:57.740 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.740 16:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.000 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.000 16:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.000 [2024-09-28 16:11:12.618056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.940 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.941 "name": "raid_bdev1", 00:09:58.941 "uuid": "2e956315-aa6f-46db-8363-0e52216bba5b", 00:09:58.941 "strip_size_kb": 64, 00:09:58.941 "state": "online", 
00:09:58.941 "raid_level": "concat", 00:09:58.941 "superblock": true, 00:09:58.941 "num_base_bdevs": 3, 00:09:58.941 "num_base_bdevs_discovered": 3, 00:09:58.941 "num_base_bdevs_operational": 3, 00:09:58.941 "base_bdevs_list": [ 00:09:58.941 { 00:09:58.941 "name": "BaseBdev1", 00:09:58.941 "uuid": "909faf0d-d402-5d3b-be12-12ecde181f00", 00:09:58.941 "is_configured": true, 00:09:58.941 "data_offset": 2048, 00:09:58.941 "data_size": 63488 00:09:58.941 }, 00:09:58.941 { 00:09:58.941 "name": "BaseBdev2", 00:09:58.941 "uuid": "02162eb7-aa37-5899-a24e-dd2416359f63", 00:09:58.941 "is_configured": true, 00:09:58.941 "data_offset": 2048, 00:09:58.941 "data_size": 63488 00:09:58.941 }, 00:09:58.941 { 00:09:58.941 "name": "BaseBdev3", 00:09:58.941 "uuid": "3bc2596f-f335-51bc-ad88-e113840db0aa", 00:09:58.941 "is_configured": true, 00:09:58.941 "data_offset": 2048, 00:09:58.941 "data_size": 63488 00:09:58.941 } 00:09:58.941 ] 00:09:58.941 }' 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.941 16:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.509 [2024-09-28 16:11:14.034727] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.509 [2024-09-28 16:11:14.034810] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.509 [2024-09-28 16:11:14.037448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.509 [2024-09-28 16:11:14.037551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.509 [2024-09-28 16:11:14.037612] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.509 [2024-09-28 16:11:14.037655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:59.509 { 00:09:59.509 "results": [ 00:09:59.509 { 00:09:59.509 "job": "raid_bdev1", 00:09:59.509 "core_mask": "0x1", 00:09:59.509 "workload": "randrw", 00:09:59.509 "percentage": 50, 00:09:59.509 "status": "finished", 00:09:59.509 "queue_depth": 1, 00:09:59.509 "io_size": 131072, 00:09:59.509 "runtime": 1.417489, 00:09:59.509 "iops": 14745.08796893662, 00:09:59.509 "mibps": 1843.1359961170774, 00:09:59.509 "io_failed": 1, 00:09:59.509 "io_timeout": 0, 00:09:59.509 "avg_latency_us": 95.45852364057848, 00:09:59.509 "min_latency_us": 24.817467248908297, 00:09:59.509 "max_latency_us": 1352.216593886463 00:09:59.509 } 00:09:59.509 ], 00:09:59.509 "core_count": 1 00:09:59.509 } 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67278 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67278 ']' 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67278 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67278 00:09:59.509 killing process with pid 67278 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.509 16:11:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67278' 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67278 00:09:59.509 [2024-09-28 16:11:14.082392] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.509 16:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67278 00:09:59.769 [2024-09-28 16:11:14.325627] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8ZgGSrPrpw 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.150 ************************************ 00:10:01.150 END TEST raid_write_error_test 00:10:01.150 ************************************ 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:01.150 00:10:01.150 real 0m4.770s 00:10:01.150 user 0m5.487s 00:10:01.150 sys 0m0.686s 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.150 16:11:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.150 16:11:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:01.150 16:11:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:01.150 16:11:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:01.150 16:11:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.150 16:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.150 ************************************ 00:10:01.150 START TEST raid_state_function_test 00:10:01.150 ************************************ 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67416 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67416' 00:10:01.150 Process raid pid: 67416 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67416 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67416 ']' 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.150 16:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.409 [2024-09-28 16:11:15.898025] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:01.409 [2024-09-28 16:11:15.898273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.409 [2024-09-28 16:11:16.066589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.668 [2024-09-28 16:11:16.308112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.987 [2024-09-28 16:11:16.549314] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.987 [2024-09-28 16:11:16.549448] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.247 [2024-09-28 16:11:16.722028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.247 [2024-09-28 16:11:16.722175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.247 [2024-09-28 16:11:16.722207] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.247 [2024-09-28 16:11:16.722241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.247 [2024-09-28 16:11:16.722264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.247 [2024-09-28 16:11:16.722303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.247 
16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.247 "name": "Existed_Raid", 00:10:02.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.247 "strip_size_kb": 0, 00:10:02.247 "state": "configuring", 00:10:02.247 "raid_level": "raid1", 00:10:02.247 "superblock": false, 00:10:02.247 "num_base_bdevs": 3, 00:10:02.247 "num_base_bdevs_discovered": 0, 00:10:02.247 "num_base_bdevs_operational": 3, 00:10:02.247 "base_bdevs_list": [ 00:10:02.247 { 00:10:02.247 "name": "BaseBdev1", 00:10:02.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.247 "is_configured": false, 00:10:02.247 "data_offset": 0, 00:10:02.247 "data_size": 0 00:10:02.247 }, 00:10:02.247 { 00:10:02.247 "name": "BaseBdev2", 00:10:02.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.247 "is_configured": false, 00:10:02.247 "data_offset": 0, 00:10:02.247 "data_size": 0 00:10:02.247 }, 00:10:02.247 { 00:10:02.247 "name": "BaseBdev3", 00:10:02.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.247 "is_configured": false, 00:10:02.247 "data_offset": 0, 00:10:02.247 "data_size": 0 00:10:02.247 } 00:10:02.247 ] 00:10:02.247 }' 00:10:02.247 16:11:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.247 16:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 [2024-09-28 16:11:17.209093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.817 [2024-09-28 16:11:17.209203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 [2024-09-28 16:11:17.221084] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.817 [2024-09-28 16:11:17.221183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.817 [2024-09-28 16:11:17.221210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.817 [2024-09-28 16:11:17.221270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.817 [2024-09-28 16:11:17.221307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.817 [2024-09-28 16:11:17.221337] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 [2024-09-28 16:11:17.303648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.817 BaseBdev1 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 [ 00:10:02.817 { 00:10:02.817 "name": "BaseBdev1", 00:10:02.817 "aliases": [ 00:10:02.817 "c789bc87-1ec2-4687-8dea-aa99d13c8a0c" 00:10:02.817 ], 00:10:02.817 "product_name": "Malloc disk", 00:10:02.817 "block_size": 512, 00:10:02.817 "num_blocks": 65536, 00:10:02.817 "uuid": "c789bc87-1ec2-4687-8dea-aa99d13c8a0c", 00:10:02.817 "assigned_rate_limits": { 00:10:02.817 "rw_ios_per_sec": 0, 00:10:02.817 "rw_mbytes_per_sec": 0, 00:10:02.817 "r_mbytes_per_sec": 0, 00:10:02.817 "w_mbytes_per_sec": 0 00:10:02.817 }, 00:10:02.817 "claimed": true, 00:10:02.817 "claim_type": "exclusive_write", 00:10:02.817 "zoned": false, 00:10:02.817 "supported_io_types": { 00:10:02.817 "read": true, 00:10:02.817 "write": true, 00:10:02.817 "unmap": true, 00:10:02.817 "flush": true, 00:10:02.817 "reset": true, 00:10:02.817 "nvme_admin": false, 00:10:02.817 "nvme_io": false, 00:10:02.817 "nvme_io_md": false, 00:10:02.817 "write_zeroes": true, 00:10:02.817 "zcopy": true, 00:10:02.817 "get_zone_info": false, 00:10:02.817 "zone_management": false, 00:10:02.817 "zone_append": false, 00:10:02.817 "compare": false, 00:10:02.817 "compare_and_write": false, 00:10:02.817 "abort": true, 00:10:02.817 "seek_hole": false, 00:10:02.817 "seek_data": false, 00:10:02.817 "copy": true, 00:10:02.817 "nvme_iov_md": false 00:10:02.817 }, 00:10:02.817 "memory_domains": [ 00:10:02.817 { 00:10:02.817 "dma_device_id": "system", 00:10:02.817 "dma_device_type": 1 00:10:02.817 }, 00:10:02.817 { 00:10:02.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.817 "dma_device_type": 2 00:10:02.817 } 00:10:02.817 ], 00:10:02.817 "driver_specific": {} 00:10:02.817 } 00:10:02.817 ] 00:10:02.817 16:11:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.817 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:02.818 "name": "Existed_Raid", 00:10:02.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.818 "strip_size_kb": 0, 00:10:02.818 "state": "configuring", 00:10:02.818 "raid_level": "raid1", 00:10:02.818 "superblock": false, 00:10:02.818 "num_base_bdevs": 3, 00:10:02.818 "num_base_bdevs_discovered": 1, 00:10:02.818 "num_base_bdevs_operational": 3, 00:10:02.818 "base_bdevs_list": [ 00:10:02.818 { 00:10:02.818 "name": "BaseBdev1", 00:10:02.818 "uuid": "c789bc87-1ec2-4687-8dea-aa99d13c8a0c", 00:10:02.818 "is_configured": true, 00:10:02.818 "data_offset": 0, 00:10:02.818 "data_size": 65536 00:10:02.818 }, 00:10:02.818 { 00:10:02.818 "name": "BaseBdev2", 00:10:02.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.818 "is_configured": false, 00:10:02.818 "data_offset": 0, 00:10:02.818 "data_size": 0 00:10:02.818 }, 00:10:02.818 { 00:10:02.818 "name": "BaseBdev3", 00:10:02.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.818 "is_configured": false, 00:10:02.818 "data_offset": 0, 00:10:02.818 "data_size": 0 00:10:02.818 } 00:10:02.818 ] 00:10:02.818 }' 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.818 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.385 [2024-09-28 16:11:17.810790] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.385 [2024-09-28 16:11:17.810905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.385 [2024-09-28 16:11:17.822817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.385 [2024-09-28 16:11:17.824908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.385 [2024-09-28 16:11:17.825001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.385 [2024-09-28 16:11:17.825030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.385 [2024-09-28 16:11:17.825052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.385 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.385 "name": "Existed_Raid", 00:10:03.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.385 "strip_size_kb": 0, 00:10:03.385 "state": "configuring", 00:10:03.385 "raid_level": "raid1", 00:10:03.385 "superblock": false, 00:10:03.386 "num_base_bdevs": 3, 00:10:03.386 "num_base_bdevs_discovered": 1, 00:10:03.386 "num_base_bdevs_operational": 3, 00:10:03.386 "base_bdevs_list": [ 00:10:03.386 { 00:10:03.386 "name": "BaseBdev1", 00:10:03.386 "uuid": "c789bc87-1ec2-4687-8dea-aa99d13c8a0c", 00:10:03.386 "is_configured": true, 00:10:03.386 "data_offset": 0, 00:10:03.386 "data_size": 65536 00:10:03.386 }, 00:10:03.386 { 00:10:03.386 "name": "BaseBdev2", 00:10:03.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.386 
"is_configured": false, 00:10:03.386 "data_offset": 0, 00:10:03.386 "data_size": 0 00:10:03.386 }, 00:10:03.386 { 00:10:03.386 "name": "BaseBdev3", 00:10:03.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.386 "is_configured": false, 00:10:03.386 "data_offset": 0, 00:10:03.386 "data_size": 0 00:10:03.386 } 00:10:03.386 ] 00:10:03.386 }' 00:10:03.386 16:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.386 16:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.648 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.648 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.648 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.907 [2024-09-28 16:11:18.337558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.907 BaseBdev2 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.907 16:11:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.907 [ 00:10:03.907 { 00:10:03.907 "name": "BaseBdev2", 00:10:03.907 "aliases": [ 00:10:03.907 "a4545b3b-657b-402a-8be7-29809988591a" 00:10:03.907 ], 00:10:03.907 "product_name": "Malloc disk", 00:10:03.907 "block_size": 512, 00:10:03.907 "num_blocks": 65536, 00:10:03.907 "uuid": "a4545b3b-657b-402a-8be7-29809988591a", 00:10:03.907 "assigned_rate_limits": { 00:10:03.907 "rw_ios_per_sec": 0, 00:10:03.907 "rw_mbytes_per_sec": 0, 00:10:03.907 "r_mbytes_per_sec": 0, 00:10:03.907 "w_mbytes_per_sec": 0 00:10:03.907 }, 00:10:03.907 "claimed": true, 00:10:03.907 "claim_type": "exclusive_write", 00:10:03.907 "zoned": false, 00:10:03.907 "supported_io_types": { 00:10:03.907 "read": true, 00:10:03.907 "write": true, 00:10:03.907 "unmap": true, 00:10:03.907 "flush": true, 00:10:03.907 "reset": true, 00:10:03.907 "nvme_admin": false, 00:10:03.907 "nvme_io": false, 00:10:03.907 "nvme_io_md": false, 00:10:03.907 "write_zeroes": true, 00:10:03.907 "zcopy": true, 00:10:03.907 "get_zone_info": false, 00:10:03.907 "zone_management": false, 00:10:03.907 "zone_append": false, 00:10:03.907 "compare": false, 00:10:03.907 "compare_and_write": false, 00:10:03.907 "abort": true, 00:10:03.907 "seek_hole": false, 00:10:03.907 "seek_data": false, 00:10:03.907 "copy": true, 00:10:03.907 "nvme_iov_md": false 00:10:03.907 }, 00:10:03.907 
"memory_domains": [ 00:10:03.907 { 00:10:03.907 "dma_device_id": "system", 00:10:03.907 "dma_device_type": 1 00:10:03.907 }, 00:10:03.907 { 00:10:03.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.907 "dma_device_type": 2 00:10:03.907 } 00:10:03.907 ], 00:10:03.907 "driver_specific": {} 00:10:03.907 } 00:10:03.907 ] 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.907 "name": "Existed_Raid", 00:10:03.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.907 "strip_size_kb": 0, 00:10:03.907 "state": "configuring", 00:10:03.907 "raid_level": "raid1", 00:10:03.907 "superblock": false, 00:10:03.907 "num_base_bdevs": 3, 00:10:03.907 "num_base_bdevs_discovered": 2, 00:10:03.907 "num_base_bdevs_operational": 3, 00:10:03.907 "base_bdevs_list": [ 00:10:03.907 { 00:10:03.907 "name": "BaseBdev1", 00:10:03.907 "uuid": "c789bc87-1ec2-4687-8dea-aa99d13c8a0c", 00:10:03.907 "is_configured": true, 00:10:03.907 "data_offset": 0, 00:10:03.907 "data_size": 65536 00:10:03.907 }, 00:10:03.907 { 00:10:03.907 "name": "BaseBdev2", 00:10:03.907 "uuid": "a4545b3b-657b-402a-8be7-29809988591a", 00:10:03.907 "is_configured": true, 00:10:03.907 "data_offset": 0, 00:10:03.907 "data_size": 65536 00:10:03.907 }, 00:10:03.907 { 00:10:03.907 "name": "BaseBdev3", 00:10:03.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.907 "is_configured": false, 00:10:03.907 "data_offset": 0, 00:10:03.907 "data_size": 0 00:10:03.907 } 00:10:03.907 ] 00:10:03.907 }' 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.907 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.167 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:04.167 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.167 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.167 [2024-09-28 16:11:18.848465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.167 [2024-09-28 16:11:18.848599] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.167 [2024-09-28 16:11:18.848640] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:04.167 [2024-09-28 16:11:18.848977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.167 BaseBdev3 00:10:04.167 [2024-09-28 16:11:18.849214] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.167 [2024-09-28 16:11:18.849238] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:04.167 [2024-09-28 16:11:18.849518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.427 [ 00:10:04.427 { 00:10:04.427 "name": "BaseBdev3", 00:10:04.427 "aliases": [ 00:10:04.427 "2b9edbdb-e89d-4ca7-8328-7216375c6442" 00:10:04.427 ], 00:10:04.427 "product_name": "Malloc disk", 00:10:04.427 "block_size": 512, 00:10:04.427 "num_blocks": 65536, 00:10:04.427 "uuid": "2b9edbdb-e89d-4ca7-8328-7216375c6442", 00:10:04.427 "assigned_rate_limits": { 00:10:04.427 "rw_ios_per_sec": 0, 00:10:04.427 "rw_mbytes_per_sec": 0, 00:10:04.427 "r_mbytes_per_sec": 0, 00:10:04.427 "w_mbytes_per_sec": 0 00:10:04.427 }, 00:10:04.427 "claimed": true, 00:10:04.427 "claim_type": "exclusive_write", 00:10:04.427 "zoned": false, 00:10:04.427 "supported_io_types": { 00:10:04.427 "read": true, 00:10:04.427 "write": true, 00:10:04.427 "unmap": true, 00:10:04.427 "flush": true, 00:10:04.427 "reset": true, 00:10:04.427 "nvme_admin": false, 00:10:04.427 "nvme_io": false, 00:10:04.427 "nvme_io_md": false, 00:10:04.427 "write_zeroes": true, 00:10:04.427 "zcopy": true, 00:10:04.427 "get_zone_info": false, 00:10:04.427 "zone_management": false, 00:10:04.427 "zone_append": false, 00:10:04.427 "compare": false, 00:10:04.427 "compare_and_write": false, 00:10:04.427 "abort": true, 00:10:04.427 "seek_hole": false, 00:10:04.427 "seek_data": false, 00:10:04.427 
"copy": true, 00:10:04.427 "nvme_iov_md": false 00:10:04.427 }, 00:10:04.427 "memory_domains": [ 00:10:04.427 { 00:10:04.427 "dma_device_id": "system", 00:10:04.427 "dma_device_type": 1 00:10:04.427 }, 00:10:04.427 { 00:10:04.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.427 "dma_device_type": 2 00:10:04.427 } 00:10:04.427 ], 00:10:04.427 "driver_specific": {} 00:10:04.427 } 00:10:04.427 ] 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.427 16:11:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.427 "name": "Existed_Raid", 00:10:04.427 "uuid": "11dfc27c-881c-41c6-8628-8590b0169368", 00:10:04.427 "strip_size_kb": 0, 00:10:04.427 "state": "online", 00:10:04.427 "raid_level": "raid1", 00:10:04.427 "superblock": false, 00:10:04.427 "num_base_bdevs": 3, 00:10:04.427 "num_base_bdevs_discovered": 3, 00:10:04.427 "num_base_bdevs_operational": 3, 00:10:04.427 "base_bdevs_list": [ 00:10:04.427 { 00:10:04.427 "name": "BaseBdev1", 00:10:04.427 "uuid": "c789bc87-1ec2-4687-8dea-aa99d13c8a0c", 00:10:04.427 "is_configured": true, 00:10:04.427 "data_offset": 0, 00:10:04.427 "data_size": 65536 00:10:04.427 }, 00:10:04.427 { 00:10:04.427 "name": "BaseBdev2", 00:10:04.427 "uuid": "a4545b3b-657b-402a-8be7-29809988591a", 00:10:04.427 "is_configured": true, 00:10:04.427 "data_offset": 0, 00:10:04.427 "data_size": 65536 00:10:04.427 }, 00:10:04.427 { 00:10:04.427 "name": "BaseBdev3", 00:10:04.427 "uuid": "2b9edbdb-e89d-4ca7-8328-7216375c6442", 00:10:04.427 "is_configured": true, 00:10:04.427 "data_offset": 0, 00:10:04.427 "data_size": 65536 00:10:04.427 } 00:10:04.427 ] 00:10:04.427 }' 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.427 16:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.686 16:11:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.686 [2024-09-28 16:11:19.347973] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.686 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.946 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.946 "name": "Existed_Raid", 00:10:04.946 "aliases": [ 00:10:04.946 "11dfc27c-881c-41c6-8628-8590b0169368" 00:10:04.946 ], 00:10:04.946 "product_name": "Raid Volume", 00:10:04.946 "block_size": 512, 00:10:04.946 "num_blocks": 65536, 00:10:04.946 "uuid": "11dfc27c-881c-41c6-8628-8590b0169368", 00:10:04.946 "assigned_rate_limits": { 00:10:04.946 "rw_ios_per_sec": 0, 00:10:04.946 "rw_mbytes_per_sec": 0, 00:10:04.946 "r_mbytes_per_sec": 0, 00:10:04.946 "w_mbytes_per_sec": 0 00:10:04.946 }, 00:10:04.946 "claimed": false, 00:10:04.946 "zoned": false, 
00:10:04.946 "supported_io_types": { 00:10:04.946 "read": true, 00:10:04.946 "write": true, 00:10:04.946 "unmap": false, 00:10:04.946 "flush": false, 00:10:04.946 "reset": true, 00:10:04.946 "nvme_admin": false, 00:10:04.946 "nvme_io": false, 00:10:04.946 "nvme_io_md": false, 00:10:04.946 "write_zeroes": true, 00:10:04.946 "zcopy": false, 00:10:04.946 "get_zone_info": false, 00:10:04.946 "zone_management": false, 00:10:04.946 "zone_append": false, 00:10:04.946 "compare": false, 00:10:04.946 "compare_and_write": false, 00:10:04.946 "abort": false, 00:10:04.946 "seek_hole": false, 00:10:04.946 "seek_data": false, 00:10:04.946 "copy": false, 00:10:04.946 "nvme_iov_md": false 00:10:04.946 }, 00:10:04.946 "memory_domains": [ 00:10:04.946 { 00:10:04.946 "dma_device_id": "system", 00:10:04.946 "dma_device_type": 1 00:10:04.946 }, 00:10:04.946 { 00:10:04.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.946 "dma_device_type": 2 00:10:04.946 }, 00:10:04.946 { 00:10:04.946 "dma_device_id": "system", 00:10:04.946 "dma_device_type": 1 00:10:04.946 }, 00:10:04.946 { 00:10:04.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.946 "dma_device_type": 2 00:10:04.946 }, 00:10:04.946 { 00:10:04.946 "dma_device_id": "system", 00:10:04.946 "dma_device_type": 1 00:10:04.946 }, 00:10:04.946 { 00:10:04.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.946 "dma_device_type": 2 00:10:04.946 } 00:10:04.946 ], 00:10:04.946 "driver_specific": { 00:10:04.946 "raid": { 00:10:04.946 "uuid": "11dfc27c-881c-41c6-8628-8590b0169368", 00:10:04.946 "strip_size_kb": 0, 00:10:04.946 "state": "online", 00:10:04.946 "raid_level": "raid1", 00:10:04.946 "superblock": false, 00:10:04.946 "num_base_bdevs": 3, 00:10:04.946 "num_base_bdevs_discovered": 3, 00:10:04.946 "num_base_bdevs_operational": 3, 00:10:04.946 "base_bdevs_list": [ 00:10:04.946 { 00:10:04.946 "name": "BaseBdev1", 00:10:04.946 "uuid": "c789bc87-1ec2-4687-8dea-aa99d13c8a0c", 00:10:04.946 "is_configured": true, 00:10:04.946 
"data_offset": 0, 00:10:04.946 "data_size": 65536 00:10:04.946 }, 00:10:04.946 { 00:10:04.946 "name": "BaseBdev2", 00:10:04.946 "uuid": "a4545b3b-657b-402a-8be7-29809988591a", 00:10:04.946 "is_configured": true, 00:10:04.946 "data_offset": 0, 00:10:04.946 "data_size": 65536 00:10:04.946 }, 00:10:04.946 { 00:10:04.946 "name": "BaseBdev3", 00:10:04.947 "uuid": "2b9edbdb-e89d-4ca7-8328-7216375c6442", 00:10:04.947 "is_configured": true, 00:10:04.947 "data_offset": 0, 00:10:04.947 "data_size": 65536 00:10:04.947 } 00:10:04.947 ] 00:10:04.947 } 00:10:04.947 } 00:10:04.947 }' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:04.947 BaseBdev2 00:10:04.947 BaseBdev3' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.947 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.947 [2024-09-28 16:11:19.619214] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:05.207 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.208 "name": "Existed_Raid", 00:10:05.208 "uuid": "11dfc27c-881c-41c6-8628-8590b0169368", 00:10:05.208 "strip_size_kb": 0, 00:10:05.208 "state": "online", 00:10:05.208 "raid_level": "raid1", 00:10:05.208 "superblock": false, 00:10:05.208 "num_base_bdevs": 3, 00:10:05.208 "num_base_bdevs_discovered": 2, 00:10:05.208 "num_base_bdevs_operational": 2, 00:10:05.208 "base_bdevs_list": [ 00:10:05.208 { 00:10:05.208 "name": null, 00:10:05.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.208 "is_configured": false, 00:10:05.208 "data_offset": 0, 00:10:05.208 "data_size": 65536 00:10:05.208 }, 00:10:05.208 { 00:10:05.208 "name": "BaseBdev2", 00:10:05.208 "uuid": "a4545b3b-657b-402a-8be7-29809988591a", 00:10:05.208 "is_configured": true, 00:10:05.208 "data_offset": 0, 00:10:05.208 "data_size": 65536 00:10:05.208 }, 00:10:05.208 { 00:10:05.208 "name": "BaseBdev3", 00:10:05.208 "uuid": "2b9edbdb-e89d-4ca7-8328-7216375c6442", 00:10:05.208 "is_configured": true, 00:10:05.208 "data_offset": 0, 00:10:05.208 "data_size": 65536 00:10:05.208 } 00:10:05.208 ] 
00:10:05.208 }' 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.208 16:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.776 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.777 [2024-09-28 16:11:20.224443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.777 16:11:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.777 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.777 [2024-09-28 16:11:20.383690] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.777 [2024-09-28 16:11:20.383876] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.037 [2024-09-28 16:11:20.483714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.037 [2024-09-28 16:11:20.483860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.037 [2024-09-28 16:11:20.483905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.037 16:11:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.037 BaseBdev2 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.037 
16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.037 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.037 [ 00:10:06.037 { 00:10:06.037 "name": "BaseBdev2", 00:10:06.037 "aliases": [ 00:10:06.037 "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd" 00:10:06.037 ], 00:10:06.038 "product_name": "Malloc disk", 00:10:06.038 "block_size": 512, 00:10:06.038 "num_blocks": 65536, 00:10:06.038 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:06.038 "assigned_rate_limits": { 00:10:06.038 "rw_ios_per_sec": 0, 00:10:06.038 "rw_mbytes_per_sec": 0, 00:10:06.038 "r_mbytes_per_sec": 0, 00:10:06.038 "w_mbytes_per_sec": 0 00:10:06.038 }, 00:10:06.038 "claimed": false, 00:10:06.038 "zoned": false, 00:10:06.038 "supported_io_types": { 00:10:06.038 "read": true, 00:10:06.038 "write": true, 00:10:06.038 "unmap": true, 00:10:06.038 "flush": true, 00:10:06.038 "reset": true, 00:10:06.038 "nvme_admin": false, 00:10:06.038 "nvme_io": false, 00:10:06.038 "nvme_io_md": false, 00:10:06.038 "write_zeroes": true, 
00:10:06.038 "zcopy": true, 00:10:06.038 "get_zone_info": false, 00:10:06.038 "zone_management": false, 00:10:06.038 "zone_append": false, 00:10:06.038 "compare": false, 00:10:06.038 "compare_and_write": false, 00:10:06.038 "abort": true, 00:10:06.038 "seek_hole": false, 00:10:06.038 "seek_data": false, 00:10:06.038 "copy": true, 00:10:06.038 "nvme_iov_md": false 00:10:06.038 }, 00:10:06.038 "memory_domains": [ 00:10:06.038 { 00:10:06.038 "dma_device_id": "system", 00:10:06.038 "dma_device_type": 1 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.038 "dma_device_type": 2 00:10:06.038 } 00:10:06.038 ], 00:10:06.038 "driver_specific": {} 00:10:06.038 } 00:10:06.038 ] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.038 BaseBdev3 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.038 16:11:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.038 [ 00:10:06.038 { 00:10:06.038 "name": "BaseBdev3", 00:10:06.038 "aliases": [ 00:10:06.038 "eed084d9-4f33-4a84-a645-7f3ef3fc36c5" 00:10:06.038 ], 00:10:06.038 "product_name": "Malloc disk", 00:10:06.038 "block_size": 512, 00:10:06.038 "num_blocks": 65536, 00:10:06.038 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:06.038 "assigned_rate_limits": { 00:10:06.038 "rw_ios_per_sec": 0, 00:10:06.038 "rw_mbytes_per_sec": 0, 00:10:06.038 "r_mbytes_per_sec": 0, 00:10:06.038 "w_mbytes_per_sec": 0 00:10:06.038 }, 00:10:06.038 "claimed": false, 00:10:06.038 "zoned": false, 00:10:06.038 "supported_io_types": { 00:10:06.038 "read": true, 00:10:06.038 "write": true, 00:10:06.038 "unmap": true, 00:10:06.038 "flush": true, 00:10:06.038 "reset": true, 00:10:06.038 "nvme_admin": false, 00:10:06.038 "nvme_io": false, 00:10:06.038 "nvme_io_md": false, 00:10:06.038 "write_zeroes": true, 
00:10:06.038 "zcopy": true, 00:10:06.038 "get_zone_info": false, 00:10:06.038 "zone_management": false, 00:10:06.038 "zone_append": false, 00:10:06.038 "compare": false, 00:10:06.038 "compare_and_write": false, 00:10:06.038 "abort": true, 00:10:06.038 "seek_hole": false, 00:10:06.038 "seek_data": false, 00:10:06.038 "copy": true, 00:10:06.038 "nvme_iov_md": false 00:10:06.038 }, 00:10:06.038 "memory_domains": [ 00:10:06.038 { 00:10:06.038 "dma_device_id": "system", 00:10:06.038 "dma_device_type": 1 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.038 "dma_device_type": 2 00:10:06.038 } 00:10:06.038 ], 00:10:06.038 "driver_specific": {} 00:10:06.038 } 00:10:06.038 ] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.038 [2024-09-28 16:11:20.702515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.038 [2024-09-28 16:11:20.702620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.038 [2024-09-28 16:11:20.702679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.038 [2024-09-28 16:11:20.704806] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.038 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.298 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.298 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:06.298 "name": "Existed_Raid", 00:10:06.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.298 "strip_size_kb": 0, 00:10:06.298 "state": "configuring", 00:10:06.298 "raid_level": "raid1", 00:10:06.298 "superblock": false, 00:10:06.298 "num_base_bdevs": 3, 00:10:06.298 "num_base_bdevs_discovered": 2, 00:10:06.298 "num_base_bdevs_operational": 3, 00:10:06.298 "base_bdevs_list": [ 00:10:06.298 { 00:10:06.298 "name": "BaseBdev1", 00:10:06.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.298 "is_configured": false, 00:10:06.298 "data_offset": 0, 00:10:06.298 "data_size": 0 00:10:06.298 }, 00:10:06.298 { 00:10:06.298 "name": "BaseBdev2", 00:10:06.298 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:06.298 "is_configured": true, 00:10:06.298 "data_offset": 0, 00:10:06.298 "data_size": 65536 00:10:06.298 }, 00:10:06.298 { 00:10:06.298 "name": "BaseBdev3", 00:10:06.298 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:06.298 "is_configured": true, 00:10:06.298 "data_offset": 0, 00:10:06.298 "data_size": 65536 00:10:06.298 } 00:10:06.298 ] 00:10:06.298 }' 00:10:06.298 16:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.298 16:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.558 [2024-09-28 16:11:21.101776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.558 "name": "Existed_Raid", 00:10:06.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.558 "strip_size_kb": 0, 00:10:06.558 "state": "configuring", 00:10:06.558 "raid_level": "raid1", 00:10:06.558 "superblock": false, 00:10:06.558 "num_base_bdevs": 3, 
00:10:06.558 "num_base_bdevs_discovered": 1, 00:10:06.558 "num_base_bdevs_operational": 3, 00:10:06.558 "base_bdevs_list": [ 00:10:06.558 { 00:10:06.558 "name": "BaseBdev1", 00:10:06.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.558 "is_configured": false, 00:10:06.558 "data_offset": 0, 00:10:06.558 "data_size": 0 00:10:06.558 }, 00:10:06.558 { 00:10:06.558 "name": null, 00:10:06.558 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:06.558 "is_configured": false, 00:10:06.558 "data_offset": 0, 00:10:06.558 "data_size": 65536 00:10:06.558 }, 00:10:06.558 { 00:10:06.558 "name": "BaseBdev3", 00:10:06.558 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:06.558 "is_configured": true, 00:10:06.558 "data_offset": 0, 00:10:06.558 "data_size": 65536 00:10:06.558 } 00:10:06.558 ] 00:10:06.558 }' 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.558 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.128 16:11:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.128 [2024-09-28 16:11:21.634657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.128 BaseBdev1 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.128 [ 00:10:07.128 { 00:10:07.128 "name": "BaseBdev1", 00:10:07.128 "aliases": [ 00:10:07.128 "23c48f0e-020a-4fb3-ba70-6b5034936dcf" 00:10:07.128 ], 00:10:07.128 "product_name": "Malloc disk", 
00:10:07.128 "block_size": 512, 00:10:07.128 "num_blocks": 65536, 00:10:07.128 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:07.128 "assigned_rate_limits": { 00:10:07.128 "rw_ios_per_sec": 0, 00:10:07.128 "rw_mbytes_per_sec": 0, 00:10:07.128 "r_mbytes_per_sec": 0, 00:10:07.128 "w_mbytes_per_sec": 0 00:10:07.128 }, 00:10:07.128 "claimed": true, 00:10:07.128 "claim_type": "exclusive_write", 00:10:07.128 "zoned": false, 00:10:07.128 "supported_io_types": { 00:10:07.128 "read": true, 00:10:07.128 "write": true, 00:10:07.128 "unmap": true, 00:10:07.128 "flush": true, 00:10:07.128 "reset": true, 00:10:07.128 "nvme_admin": false, 00:10:07.128 "nvme_io": false, 00:10:07.128 "nvme_io_md": false, 00:10:07.128 "write_zeroes": true, 00:10:07.128 "zcopy": true, 00:10:07.128 "get_zone_info": false, 00:10:07.128 "zone_management": false, 00:10:07.128 "zone_append": false, 00:10:07.128 "compare": false, 00:10:07.128 "compare_and_write": false, 00:10:07.128 "abort": true, 00:10:07.128 "seek_hole": false, 00:10:07.128 "seek_data": false, 00:10:07.128 "copy": true, 00:10:07.128 "nvme_iov_md": false 00:10:07.128 }, 00:10:07.128 "memory_domains": [ 00:10:07.128 { 00:10:07.128 "dma_device_id": "system", 00:10:07.128 "dma_device_type": 1 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.128 "dma_device_type": 2 00:10:07.128 } 00:10:07.128 ], 00:10:07.128 "driver_specific": {} 00:10:07.128 } 00:10:07.128 ] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.128 "name": "Existed_Raid", 00:10:07.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.128 "strip_size_kb": 0, 00:10:07.128 "state": "configuring", 00:10:07.128 "raid_level": "raid1", 00:10:07.128 "superblock": false, 00:10:07.128 "num_base_bdevs": 3, 00:10:07.128 "num_base_bdevs_discovered": 2, 00:10:07.128 "num_base_bdevs_operational": 3, 00:10:07.128 "base_bdevs_list": [ 00:10:07.128 { 00:10:07.128 "name": "BaseBdev1", 00:10:07.128 "uuid": 
"23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:07.128 "is_configured": true, 00:10:07.128 "data_offset": 0, 00:10:07.128 "data_size": 65536 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "name": null, 00:10:07.128 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:07.128 "is_configured": false, 00:10:07.128 "data_offset": 0, 00:10:07.128 "data_size": 65536 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "name": "BaseBdev3", 00:10:07.128 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:07.128 "is_configured": true, 00:10:07.128 "data_offset": 0, 00:10:07.128 "data_size": 65536 00:10:07.128 } 00:10:07.128 ] 00:10:07.128 }' 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.128 16:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.388 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.388 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.388 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.388 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 [2024-09-28 16:11:22.117876] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.648 16:11:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.648 "name": "Existed_Raid", 00:10:07.648 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:07.648 "strip_size_kb": 0, 00:10:07.648 "state": "configuring", 00:10:07.648 "raid_level": "raid1", 00:10:07.648 "superblock": false, 00:10:07.648 "num_base_bdevs": 3, 00:10:07.648 "num_base_bdevs_discovered": 1, 00:10:07.648 "num_base_bdevs_operational": 3, 00:10:07.648 "base_bdevs_list": [ 00:10:07.648 { 00:10:07.648 "name": "BaseBdev1", 00:10:07.648 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:07.648 "is_configured": true, 00:10:07.648 "data_offset": 0, 00:10:07.648 "data_size": 65536 00:10:07.648 }, 00:10:07.648 { 00:10:07.648 "name": null, 00:10:07.648 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:07.648 "is_configured": false, 00:10:07.648 "data_offset": 0, 00:10:07.648 "data_size": 65536 00:10:07.648 }, 00:10:07.648 { 00:10:07.648 "name": null, 00:10:07.648 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:07.648 "is_configured": false, 00:10:07.648 "data_offset": 0, 00:10:07.648 "data_size": 65536 00:10:07.648 } 00:10:07.648 ] 00:10:07.648 }' 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.648 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.907 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.907 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.907 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.907 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.907 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.166 [2024-09-28 16:11:22.617017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.166 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.166 "name": "Existed_Raid", 00:10:08.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.166 "strip_size_kb": 0, 00:10:08.166 "state": "configuring", 00:10:08.166 "raid_level": "raid1", 00:10:08.166 "superblock": false, 00:10:08.166 "num_base_bdevs": 3, 00:10:08.166 "num_base_bdevs_discovered": 2, 00:10:08.166 "num_base_bdevs_operational": 3, 00:10:08.166 "base_bdevs_list": [ 00:10:08.166 { 00:10:08.166 "name": "BaseBdev1", 00:10:08.166 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:08.166 "is_configured": true, 00:10:08.166 "data_offset": 0, 00:10:08.166 "data_size": 65536 00:10:08.166 }, 00:10:08.166 { 00:10:08.166 "name": null, 00:10:08.166 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:08.166 "is_configured": false, 00:10:08.166 "data_offset": 0, 00:10:08.167 "data_size": 65536 00:10:08.167 }, 00:10:08.167 { 00:10:08.167 "name": "BaseBdev3", 00:10:08.167 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:08.167 "is_configured": true, 00:10:08.167 "data_offset": 0, 00:10:08.167 "data_size": 65536 00:10:08.167 } 00:10:08.167 ] 00:10:08.167 }' 00:10:08.167 16:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.167 16:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.426 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.426 [2024-09-28 16:11:23.108255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.685 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.685 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.685 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.685 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.685 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.685 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.685 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.686 16:11:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.686 "name": "Existed_Raid", 00:10:08.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.686 "strip_size_kb": 0, 00:10:08.686 "state": "configuring", 00:10:08.686 "raid_level": "raid1", 00:10:08.686 "superblock": false, 00:10:08.686 "num_base_bdevs": 3, 00:10:08.686 "num_base_bdevs_discovered": 1, 00:10:08.686 "num_base_bdevs_operational": 3, 00:10:08.686 "base_bdevs_list": [ 00:10:08.686 { 00:10:08.686 "name": null, 00:10:08.686 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:08.686 "is_configured": false, 00:10:08.686 "data_offset": 0, 00:10:08.686 "data_size": 65536 00:10:08.686 }, 00:10:08.686 { 00:10:08.686 "name": null, 00:10:08.686 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:08.686 "is_configured": false, 00:10:08.686 "data_offset": 0, 00:10:08.686 "data_size": 65536 00:10:08.686 }, 00:10:08.686 { 00:10:08.686 "name": "BaseBdev3", 00:10:08.686 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:08.686 "is_configured": true, 00:10:08.686 "data_offset": 0, 00:10:08.686 "data_size": 65536 00:10:08.686 } 00:10:08.686 ] 00:10:08.686 }' 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.686 16:11:23 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.255 [2024-09-28 16:11:23.718929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.255 "name": "Existed_Raid", 00:10:09.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.255 "strip_size_kb": 0, 00:10:09.255 "state": "configuring", 00:10:09.255 "raid_level": "raid1", 00:10:09.255 "superblock": false, 00:10:09.255 "num_base_bdevs": 3, 00:10:09.255 "num_base_bdevs_discovered": 2, 00:10:09.255 "num_base_bdevs_operational": 3, 00:10:09.255 "base_bdevs_list": [ 00:10:09.255 { 00:10:09.255 "name": null, 00:10:09.255 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:09.255 "is_configured": false, 00:10:09.255 "data_offset": 0, 00:10:09.255 "data_size": 65536 00:10:09.255 }, 00:10:09.255 { 00:10:09.255 "name": "BaseBdev2", 00:10:09.255 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:09.255 "is_configured": true, 00:10:09.255 "data_offset": 0, 00:10:09.255 "data_size": 65536 00:10:09.255 }, 00:10:09.255 { 
00:10:09.255 "name": "BaseBdev3", 00:10:09.255 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:09.255 "is_configured": true, 00:10:09.255 "data_offset": 0, 00:10:09.255 "data_size": 65536 00:10:09.255 } 00:10:09.255 ] 00:10:09.255 }' 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.255 16:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.514 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.514 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.514 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.514 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.514 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 23c48f0e-020a-4fb3-ba70-6b5034936dcf 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.773 16:11:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.773 [2024-09-28 16:11:24.299902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.773 [2024-09-28 16:11:24.300046] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.773 [2024-09-28 16:11:24.300059] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:09.773 [2024-09-28 16:11:24.300378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:09.773 [2024-09-28 16:11:24.300564] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.773 [2024-09-28 16:11:24.300577] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:09.773 [2024-09-28 16:11:24.300826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.773 NewBaseBdev 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.773 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.773 [ 00:10:09.773 { 00:10:09.773 "name": "NewBaseBdev", 00:10:09.773 "aliases": [ 00:10:09.773 "23c48f0e-020a-4fb3-ba70-6b5034936dcf" 00:10:09.773 ], 00:10:09.773 "product_name": "Malloc disk", 00:10:09.773 "block_size": 512, 00:10:09.773 "num_blocks": 65536, 00:10:09.773 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:09.773 "assigned_rate_limits": { 00:10:09.773 "rw_ios_per_sec": 0, 00:10:09.773 "rw_mbytes_per_sec": 0, 00:10:09.773 "r_mbytes_per_sec": 0, 00:10:09.773 "w_mbytes_per_sec": 0 00:10:09.773 }, 00:10:09.773 "claimed": true, 00:10:09.773 "claim_type": "exclusive_write", 00:10:09.773 "zoned": false, 00:10:09.773 "supported_io_types": { 00:10:09.773 "read": true, 00:10:09.773 "write": true, 00:10:09.773 "unmap": true, 00:10:09.773 "flush": true, 00:10:09.773 "reset": true, 00:10:09.773 "nvme_admin": false, 00:10:09.773 "nvme_io": false, 00:10:09.773 "nvme_io_md": false, 00:10:09.773 "write_zeroes": true, 00:10:09.773 "zcopy": true, 00:10:09.773 "get_zone_info": false, 00:10:09.773 "zone_management": false, 00:10:09.774 "zone_append": false, 00:10:09.774 "compare": false, 00:10:09.774 "compare_and_write": false, 00:10:09.774 "abort": true, 00:10:09.774 "seek_hole": false, 00:10:09.774 "seek_data": false, 00:10:09.774 "copy": true, 00:10:09.774 "nvme_iov_md": false 00:10:09.774 }, 00:10:09.774 "memory_domains": [ 00:10:09.774 { 00:10:09.774 
"dma_device_id": "system", 00:10:09.774 "dma_device_type": 1 00:10:09.774 }, 00:10:09.774 { 00:10:09.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.774 "dma_device_type": 2 00:10:09.774 } 00:10:09.774 ], 00:10:09.774 "driver_specific": {} 00:10:09.774 } 00:10:09.774 ] 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.774 "name": "Existed_Raid", 00:10:09.774 "uuid": "c4f7c2d7-8146-45dc-9e56-2a7096e85f69", 00:10:09.774 "strip_size_kb": 0, 00:10:09.774 "state": "online", 00:10:09.774 "raid_level": "raid1", 00:10:09.774 "superblock": false, 00:10:09.774 "num_base_bdevs": 3, 00:10:09.774 "num_base_bdevs_discovered": 3, 00:10:09.774 "num_base_bdevs_operational": 3, 00:10:09.774 "base_bdevs_list": [ 00:10:09.774 { 00:10:09.774 "name": "NewBaseBdev", 00:10:09.774 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:09.774 "is_configured": true, 00:10:09.774 "data_offset": 0, 00:10:09.774 "data_size": 65536 00:10:09.774 }, 00:10:09.774 { 00:10:09.774 "name": "BaseBdev2", 00:10:09.774 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:09.774 "is_configured": true, 00:10:09.774 "data_offset": 0, 00:10:09.774 "data_size": 65536 00:10:09.774 }, 00:10:09.774 { 00:10:09.774 "name": "BaseBdev3", 00:10:09.774 "uuid": "eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:09.774 "is_configured": true, 00:10:09.774 "data_offset": 0, 00:10:09.774 "data_size": 65536 00:10:09.774 } 00:10:09.774 ] 00:10:09.774 }' 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.774 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.342 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.342 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.342 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.342 16:11:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.343 [2024-09-28 16:11:24.819295] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.343 "name": "Existed_Raid", 00:10:10.343 "aliases": [ 00:10:10.343 "c4f7c2d7-8146-45dc-9e56-2a7096e85f69" 00:10:10.343 ], 00:10:10.343 "product_name": "Raid Volume", 00:10:10.343 "block_size": 512, 00:10:10.343 "num_blocks": 65536, 00:10:10.343 "uuid": "c4f7c2d7-8146-45dc-9e56-2a7096e85f69", 00:10:10.343 "assigned_rate_limits": { 00:10:10.343 "rw_ios_per_sec": 0, 00:10:10.343 "rw_mbytes_per_sec": 0, 00:10:10.343 "r_mbytes_per_sec": 0, 00:10:10.343 "w_mbytes_per_sec": 0 00:10:10.343 }, 00:10:10.343 "claimed": false, 00:10:10.343 "zoned": false, 00:10:10.343 "supported_io_types": { 00:10:10.343 "read": true, 00:10:10.343 "write": true, 00:10:10.343 "unmap": false, 00:10:10.343 "flush": false, 00:10:10.343 "reset": true, 00:10:10.343 "nvme_admin": false, 00:10:10.343 "nvme_io": false, 00:10:10.343 "nvme_io_md": false, 00:10:10.343 "write_zeroes": true, 00:10:10.343 "zcopy": false, 00:10:10.343 
"get_zone_info": false, 00:10:10.343 "zone_management": false, 00:10:10.343 "zone_append": false, 00:10:10.343 "compare": false, 00:10:10.343 "compare_and_write": false, 00:10:10.343 "abort": false, 00:10:10.343 "seek_hole": false, 00:10:10.343 "seek_data": false, 00:10:10.343 "copy": false, 00:10:10.343 "nvme_iov_md": false 00:10:10.343 }, 00:10:10.343 "memory_domains": [ 00:10:10.343 { 00:10:10.343 "dma_device_id": "system", 00:10:10.343 "dma_device_type": 1 00:10:10.343 }, 00:10:10.343 { 00:10:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.343 "dma_device_type": 2 00:10:10.343 }, 00:10:10.343 { 00:10:10.343 "dma_device_id": "system", 00:10:10.343 "dma_device_type": 1 00:10:10.343 }, 00:10:10.343 { 00:10:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.343 "dma_device_type": 2 00:10:10.343 }, 00:10:10.343 { 00:10:10.343 "dma_device_id": "system", 00:10:10.343 "dma_device_type": 1 00:10:10.343 }, 00:10:10.343 { 00:10:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.343 "dma_device_type": 2 00:10:10.343 } 00:10:10.343 ], 00:10:10.343 "driver_specific": { 00:10:10.343 "raid": { 00:10:10.343 "uuid": "c4f7c2d7-8146-45dc-9e56-2a7096e85f69", 00:10:10.343 "strip_size_kb": 0, 00:10:10.343 "state": "online", 00:10:10.343 "raid_level": "raid1", 00:10:10.343 "superblock": false, 00:10:10.343 "num_base_bdevs": 3, 00:10:10.343 "num_base_bdevs_discovered": 3, 00:10:10.343 "num_base_bdevs_operational": 3, 00:10:10.343 "base_bdevs_list": [ 00:10:10.343 { 00:10:10.343 "name": "NewBaseBdev", 00:10:10.343 "uuid": "23c48f0e-020a-4fb3-ba70-6b5034936dcf", 00:10:10.343 "is_configured": true, 00:10:10.343 "data_offset": 0, 00:10:10.343 "data_size": 65536 00:10:10.343 }, 00:10:10.343 { 00:10:10.343 "name": "BaseBdev2", 00:10:10.343 "uuid": "bb4147a0-a43d-4c6c-ab07-6e57cb2315cd", 00:10:10.343 "is_configured": true, 00:10:10.343 "data_offset": 0, 00:10:10.343 "data_size": 65536 00:10:10.343 }, 00:10:10.343 { 00:10:10.343 "name": "BaseBdev3", 00:10:10.343 "uuid": 
"eed084d9-4f33-4a84-a645-7f3ef3fc36c5", 00:10:10.343 "is_configured": true, 00:10:10.343 "data_offset": 0, 00:10:10.343 "data_size": 65536 00:10:10.343 } 00:10:10.343 ] 00:10:10.343 } 00:10:10.343 } 00:10:10.343 }' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.343 BaseBdev2 00:10:10.343 BaseBdev3' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.343 16:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.343 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.343 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.343 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:10.602 [2024-09-28 16:11:25.082562] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.602 [2024-09-28 16:11:25.082632] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.602 [2024-09-28 16:11:25.082715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.602 [2024-09-28 16:11:25.083059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.602 [2024-09-28 16:11:25.083113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67416 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67416 ']' 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67416 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67416 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67416' 00:10:10.602 killing process with pid 67416 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67416 00:10:10.602 
[2024-09-28 16:11:25.133093] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.602 16:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67416 00:10:10.861 [2024-09-28 16:11:25.448995] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.237 ************************************ 00:10:12.237 END TEST raid_state_function_test 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.237 00:10:12.237 real 0m10.972s 00:10:12.237 user 0m17.180s 00:10:12.237 sys 0m2.054s 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.237 ************************************ 00:10:12.237 16:11:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:12.237 16:11:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:12.237 16:11:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.237 16:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.237 ************************************ 00:10:12.237 START TEST raid_state_function_test_sb 00:10:12.237 ************************************ 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.237 16:11:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:12.237 
16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68043 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68043' 00:10:12.237 Process raid pid: 68043 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68043 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68043 ']' 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.237 16:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.496 [2024-09-28 16:11:26.935193] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:12.496 [2024-09-28 16:11:26.935391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.496 [2024-09-28 16:11:27.100941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.755 [2024-09-28 16:11:27.341176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.015 [2024-09-28 16:11:27.573397] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.015 [2024-09-28 16:11:27.573531] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.274 [2024-09-28 16:11:27.762141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.274 [2024-09-28 16:11:27.762207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.274 [2024-09-28 16:11:27.762217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.274 [2024-09-28 16:11:27.762235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.274 [2024-09-28 16:11:27.762241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:13.274 [2024-09-28 16:11:27.762250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.274 "name": "Existed_Raid", 00:10:13.274 "uuid": "c31eefc3-fa09-4248-88d6-f0a7be53a49b", 00:10:13.274 "strip_size_kb": 0, 00:10:13.274 "state": "configuring", 00:10:13.274 "raid_level": "raid1", 00:10:13.274 "superblock": true, 00:10:13.274 "num_base_bdevs": 3, 00:10:13.274 "num_base_bdevs_discovered": 0, 00:10:13.274 "num_base_bdevs_operational": 3, 00:10:13.274 "base_bdevs_list": [ 00:10:13.274 { 00:10:13.274 "name": "BaseBdev1", 00:10:13.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.274 "is_configured": false, 00:10:13.274 "data_offset": 0, 00:10:13.274 "data_size": 0 00:10:13.274 }, 00:10:13.274 { 00:10:13.274 "name": "BaseBdev2", 00:10:13.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.274 "is_configured": false, 00:10:13.274 "data_offset": 0, 00:10:13.274 "data_size": 0 00:10:13.274 }, 00:10:13.274 { 00:10:13.274 "name": "BaseBdev3", 00:10:13.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.274 "is_configured": false, 00:10:13.274 "data_offset": 0, 00:10:13.274 "data_size": 0 00:10:13.274 } 00:10:13.274 ] 00:10:13.274 }' 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.274 16:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.533 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.533 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.533 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.533 [2024-09-28 16:11:28.217340] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.533 [2024-09-28 16:11:28.217429] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.793 [2024-09-28 16:11:28.229362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.793 [2024-09-28 16:11:28.229443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.793 [2024-09-28 16:11:28.229484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.793 [2024-09-28 16:11:28.229508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.793 [2024-09-28 16:11:28.229525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.793 [2024-09-28 16:11:28.229546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.793 [2024-09-28 16:11:28.317085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.793 BaseBdev1 
00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.793 [ 00:10:13.793 { 00:10:13.793 "name": "BaseBdev1", 00:10:13.793 "aliases": [ 00:10:13.793 "c6dfcd39-5664-40b2-9b50-29948c91a087" 00:10:13.793 ], 00:10:13.793 "product_name": "Malloc disk", 00:10:13.793 "block_size": 512, 00:10:13.793 "num_blocks": 65536, 00:10:13.793 "uuid": "c6dfcd39-5664-40b2-9b50-29948c91a087", 00:10:13.793 "assigned_rate_limits": { 00:10:13.793 
"rw_ios_per_sec": 0, 00:10:13.793 "rw_mbytes_per_sec": 0, 00:10:13.793 "r_mbytes_per_sec": 0, 00:10:13.793 "w_mbytes_per_sec": 0 00:10:13.793 }, 00:10:13.793 "claimed": true, 00:10:13.793 "claim_type": "exclusive_write", 00:10:13.793 "zoned": false, 00:10:13.793 "supported_io_types": { 00:10:13.793 "read": true, 00:10:13.793 "write": true, 00:10:13.793 "unmap": true, 00:10:13.793 "flush": true, 00:10:13.793 "reset": true, 00:10:13.793 "nvme_admin": false, 00:10:13.793 "nvme_io": false, 00:10:13.793 "nvme_io_md": false, 00:10:13.793 "write_zeroes": true, 00:10:13.793 "zcopy": true, 00:10:13.793 "get_zone_info": false, 00:10:13.793 "zone_management": false, 00:10:13.793 "zone_append": false, 00:10:13.793 "compare": false, 00:10:13.793 "compare_and_write": false, 00:10:13.793 "abort": true, 00:10:13.793 "seek_hole": false, 00:10:13.793 "seek_data": false, 00:10:13.793 "copy": true, 00:10:13.793 "nvme_iov_md": false 00:10:13.793 }, 00:10:13.793 "memory_domains": [ 00:10:13.793 { 00:10:13.793 "dma_device_id": "system", 00:10:13.793 "dma_device_type": 1 00:10:13.793 }, 00:10:13.793 { 00:10:13.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.793 "dma_device_type": 2 00:10:13.793 } 00:10:13.793 ], 00:10:13.793 "driver_specific": {} 00:10:13.793 } 00:10:13.793 ] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.793 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.793 "name": "Existed_Raid", 00:10:13.793 "uuid": "5e338325-48cd-4cec-af20-7d6dc861e342", 00:10:13.793 "strip_size_kb": 0, 00:10:13.793 "state": "configuring", 00:10:13.793 "raid_level": "raid1", 00:10:13.793 "superblock": true, 00:10:13.793 "num_base_bdevs": 3, 00:10:13.793 "num_base_bdevs_discovered": 1, 00:10:13.793 "num_base_bdevs_operational": 3, 00:10:13.793 "base_bdevs_list": [ 00:10:13.793 { 00:10:13.793 "name": "BaseBdev1", 00:10:13.793 "uuid": "c6dfcd39-5664-40b2-9b50-29948c91a087", 00:10:13.793 "is_configured": true, 00:10:13.793 "data_offset": 2048, 00:10:13.793 "data_size": 63488 
00:10:13.793 }, 00:10:13.793 { 00:10:13.793 "name": "BaseBdev2", 00:10:13.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.793 "is_configured": false, 00:10:13.793 "data_offset": 0, 00:10:13.793 "data_size": 0 00:10:13.793 }, 00:10:13.794 { 00:10:13.794 "name": "BaseBdev3", 00:10:13.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.794 "is_configured": false, 00:10:13.794 "data_offset": 0, 00:10:13.794 "data_size": 0 00:10:13.794 } 00:10:13.794 ] 00:10:13.794 }' 00:10:13.794 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.794 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.362 [2024-09-28 16:11:28.844191] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.362 [2024-09-28 16:11:28.844307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.362 [2024-09-28 16:11:28.856227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.362 [2024-09-28 16:11:28.858390] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.362 [2024-09-28 16:11:28.858480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.362 [2024-09-28 16:11:28.858508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.362 [2024-09-28 16:11:28.858530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.362 "name": "Existed_Raid", 00:10:14.362 "uuid": "6e902d80-f455-4c4f-a77d-b9b77fa93fa9", 00:10:14.362 "strip_size_kb": 0, 00:10:14.362 "state": "configuring", 00:10:14.362 "raid_level": "raid1", 00:10:14.362 "superblock": true, 00:10:14.362 "num_base_bdevs": 3, 00:10:14.362 "num_base_bdevs_discovered": 1, 00:10:14.362 "num_base_bdevs_operational": 3, 00:10:14.362 "base_bdevs_list": [ 00:10:14.362 { 00:10:14.362 "name": "BaseBdev1", 00:10:14.362 "uuid": "c6dfcd39-5664-40b2-9b50-29948c91a087", 00:10:14.362 "is_configured": true, 00:10:14.362 "data_offset": 2048, 00:10:14.362 "data_size": 63488 00:10:14.362 }, 00:10:14.362 { 00:10:14.362 "name": "BaseBdev2", 00:10:14.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.362 "is_configured": false, 00:10:14.362 "data_offset": 0, 00:10:14.362 "data_size": 0 00:10:14.362 }, 00:10:14.362 { 00:10:14.362 "name": "BaseBdev3", 00:10:14.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.362 "is_configured": false, 00:10:14.362 "data_offset": 0, 00:10:14.362 "data_size": 0 00:10:14.362 } 00:10:14.362 ] 00:10:14.362 }' 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.362 16:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.930 [2024-09-28 16:11:29.359191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.930 BaseBdev2 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.930 [ 00:10:14.930 { 00:10:14.930 "name": "BaseBdev2", 00:10:14.930 "aliases": [ 00:10:14.930 "f8907901-3abd-4cd2-9bc2-53587ec552c3" 00:10:14.930 ], 00:10:14.930 "product_name": "Malloc disk", 00:10:14.930 "block_size": 512, 00:10:14.930 "num_blocks": 65536, 00:10:14.930 "uuid": "f8907901-3abd-4cd2-9bc2-53587ec552c3", 00:10:14.930 "assigned_rate_limits": { 00:10:14.930 "rw_ios_per_sec": 0, 00:10:14.930 "rw_mbytes_per_sec": 0, 00:10:14.930 "r_mbytes_per_sec": 0, 00:10:14.930 "w_mbytes_per_sec": 0 00:10:14.930 }, 00:10:14.930 "claimed": true, 00:10:14.930 "claim_type": "exclusive_write", 00:10:14.930 "zoned": false, 00:10:14.930 "supported_io_types": { 00:10:14.930 "read": true, 00:10:14.930 "write": true, 00:10:14.930 "unmap": true, 00:10:14.930 "flush": true, 00:10:14.930 "reset": true, 00:10:14.930 "nvme_admin": false, 00:10:14.930 "nvme_io": false, 00:10:14.930 "nvme_io_md": false, 00:10:14.930 "write_zeroes": true, 00:10:14.930 "zcopy": true, 00:10:14.930 "get_zone_info": false, 00:10:14.930 "zone_management": false, 00:10:14.930 "zone_append": false, 00:10:14.930 "compare": false, 00:10:14.930 "compare_and_write": false, 00:10:14.930 "abort": true, 00:10:14.930 "seek_hole": false, 00:10:14.930 "seek_data": false, 00:10:14.930 "copy": true, 00:10:14.930 "nvme_iov_md": false 00:10:14.930 }, 00:10:14.930 "memory_domains": [ 00:10:14.930 { 00:10:14.930 "dma_device_id": "system", 00:10:14.930 "dma_device_type": 1 00:10:14.930 }, 00:10:14.930 { 00:10:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.930 "dma_device_type": 2 00:10:14.930 } 00:10:14.930 ], 00:10:14.930 "driver_specific": {} 00:10:14.930 } 00:10:14.930 ] 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.930 
16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.930 "name": "Existed_Raid", 00:10:14.930 "uuid": "6e902d80-f455-4c4f-a77d-b9b77fa93fa9", 00:10:14.930 "strip_size_kb": 0, 00:10:14.930 "state": "configuring", 00:10:14.930 "raid_level": "raid1", 00:10:14.930 "superblock": true, 00:10:14.930 "num_base_bdevs": 3, 00:10:14.930 "num_base_bdevs_discovered": 2, 00:10:14.930 "num_base_bdevs_operational": 3, 00:10:14.930 "base_bdevs_list": [ 00:10:14.930 { 00:10:14.930 "name": "BaseBdev1", 00:10:14.930 "uuid": "c6dfcd39-5664-40b2-9b50-29948c91a087", 00:10:14.930 "is_configured": true, 00:10:14.930 "data_offset": 2048, 00:10:14.930 "data_size": 63488 00:10:14.930 }, 00:10:14.930 { 00:10:14.930 "name": "BaseBdev2", 00:10:14.930 "uuid": "f8907901-3abd-4cd2-9bc2-53587ec552c3", 00:10:14.930 "is_configured": true, 00:10:14.930 "data_offset": 2048, 00:10:14.930 "data_size": 63488 00:10:14.930 }, 00:10:14.930 { 00:10:14.930 "name": "BaseBdev3", 00:10:14.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.930 "is_configured": false, 00:10:14.930 "data_offset": 0, 00:10:14.930 "data_size": 0 00:10:14.930 } 00:10:14.930 ] 00:10:14.930 }' 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.930 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.190 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.190 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.190 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.190 [2024-09-28 16:11:29.871779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.190 [2024-09-28 16:11:29.872182] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:15.190 BaseBdev3 00:10:15.190 [2024-09-28 16:11:29.872273] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.190 [2024-09-28 16:11:29.872582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:15.190 [2024-09-28 16:11:29.872743] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.190 [2024-09-28 16:11:29.872753] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:15.190 [2024-09-28 16:11:29.872928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.190 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.190 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:15.190 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.449 16:11:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.449 [ 00:10:15.449 { 00:10:15.449 "name": "BaseBdev3", 00:10:15.449 "aliases": [ 00:10:15.449 "0826321d-c617-487c-a308-1d6e40726a71" 00:10:15.449 ], 00:10:15.449 "product_name": "Malloc disk", 00:10:15.449 "block_size": 512, 00:10:15.449 "num_blocks": 65536, 00:10:15.449 "uuid": "0826321d-c617-487c-a308-1d6e40726a71", 00:10:15.449 "assigned_rate_limits": { 00:10:15.449 "rw_ios_per_sec": 0, 00:10:15.449 "rw_mbytes_per_sec": 0, 00:10:15.449 "r_mbytes_per_sec": 0, 00:10:15.449 "w_mbytes_per_sec": 0 00:10:15.449 }, 00:10:15.449 "claimed": true, 00:10:15.449 "claim_type": "exclusive_write", 00:10:15.449 "zoned": false, 00:10:15.449 "supported_io_types": { 00:10:15.449 "read": true, 00:10:15.449 "write": true, 00:10:15.449 "unmap": true, 00:10:15.449 "flush": true, 00:10:15.449 "reset": true, 00:10:15.449 "nvme_admin": false, 00:10:15.449 "nvme_io": false, 00:10:15.449 "nvme_io_md": false, 00:10:15.449 "write_zeroes": true, 00:10:15.449 "zcopy": true, 00:10:15.449 "get_zone_info": false, 00:10:15.449 "zone_management": false, 00:10:15.449 "zone_append": false, 00:10:15.449 "compare": false, 00:10:15.449 "compare_and_write": false, 00:10:15.449 "abort": true, 00:10:15.449 "seek_hole": false, 00:10:15.449 "seek_data": false, 00:10:15.449 "copy": true, 00:10:15.449 "nvme_iov_md": false 00:10:15.449 }, 00:10:15.449 "memory_domains": [ 00:10:15.449 { 00:10:15.449 "dma_device_id": "system", 00:10:15.449 "dma_device_type": 1 00:10:15.449 }, 00:10:15.449 { 00:10:15.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.449 "dma_device_type": 2 00:10:15.449 } 00:10:15.449 ], 00:10:15.449 "driver_specific": {} 00:10:15.449 } 00:10:15.449 ] 
00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.449 
16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.449 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.450 "name": "Existed_Raid", 00:10:15.450 "uuid": "6e902d80-f455-4c4f-a77d-b9b77fa93fa9", 00:10:15.450 "strip_size_kb": 0, 00:10:15.450 "state": "online", 00:10:15.450 "raid_level": "raid1", 00:10:15.450 "superblock": true, 00:10:15.450 "num_base_bdevs": 3, 00:10:15.450 "num_base_bdevs_discovered": 3, 00:10:15.450 "num_base_bdevs_operational": 3, 00:10:15.450 "base_bdevs_list": [ 00:10:15.450 { 00:10:15.450 "name": "BaseBdev1", 00:10:15.450 "uuid": "c6dfcd39-5664-40b2-9b50-29948c91a087", 00:10:15.450 "is_configured": true, 00:10:15.450 "data_offset": 2048, 00:10:15.450 "data_size": 63488 00:10:15.450 }, 00:10:15.450 { 00:10:15.450 "name": "BaseBdev2", 00:10:15.450 "uuid": "f8907901-3abd-4cd2-9bc2-53587ec552c3", 00:10:15.450 "is_configured": true, 00:10:15.450 "data_offset": 2048, 00:10:15.450 "data_size": 63488 00:10:15.450 }, 00:10:15.450 { 00:10:15.450 "name": "BaseBdev3", 00:10:15.450 "uuid": "0826321d-c617-487c-a308-1d6e40726a71", 00:10:15.450 "is_configured": true, 00:10:15.450 "data_offset": 2048, 00:10:15.450 "data_size": 63488 00:10:15.450 } 00:10:15.450 ] 00:10:15.450 }' 00:10:15.450 16:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.450 16:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.709 [2024-09-28 16:11:30.291385] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.709 "name": "Existed_Raid", 00:10:15.709 "aliases": [ 00:10:15.709 "6e902d80-f455-4c4f-a77d-b9b77fa93fa9" 00:10:15.709 ], 00:10:15.709 "product_name": "Raid Volume", 00:10:15.709 "block_size": 512, 00:10:15.709 "num_blocks": 63488, 00:10:15.709 "uuid": "6e902d80-f455-4c4f-a77d-b9b77fa93fa9", 00:10:15.709 "assigned_rate_limits": { 00:10:15.709 "rw_ios_per_sec": 0, 00:10:15.709 "rw_mbytes_per_sec": 0, 00:10:15.709 "r_mbytes_per_sec": 0, 00:10:15.709 "w_mbytes_per_sec": 0 00:10:15.709 }, 00:10:15.709 "claimed": false, 00:10:15.709 "zoned": false, 00:10:15.709 "supported_io_types": { 00:10:15.709 "read": true, 00:10:15.709 "write": true, 00:10:15.709 "unmap": false, 00:10:15.709 "flush": false, 00:10:15.709 "reset": true, 00:10:15.709 "nvme_admin": false, 00:10:15.709 "nvme_io": false, 00:10:15.709 "nvme_io_md": false, 00:10:15.709 "write_zeroes": true, 
00:10:15.709 "zcopy": false, 00:10:15.709 "get_zone_info": false, 00:10:15.709 "zone_management": false, 00:10:15.709 "zone_append": false, 00:10:15.709 "compare": false, 00:10:15.709 "compare_and_write": false, 00:10:15.709 "abort": false, 00:10:15.709 "seek_hole": false, 00:10:15.709 "seek_data": false, 00:10:15.709 "copy": false, 00:10:15.709 "nvme_iov_md": false 00:10:15.709 }, 00:10:15.709 "memory_domains": [ 00:10:15.709 { 00:10:15.709 "dma_device_id": "system", 00:10:15.709 "dma_device_type": 1 00:10:15.709 }, 00:10:15.709 { 00:10:15.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.709 "dma_device_type": 2 00:10:15.709 }, 00:10:15.709 { 00:10:15.709 "dma_device_id": "system", 00:10:15.709 "dma_device_type": 1 00:10:15.709 }, 00:10:15.709 { 00:10:15.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.709 "dma_device_type": 2 00:10:15.709 }, 00:10:15.709 { 00:10:15.709 "dma_device_id": "system", 00:10:15.709 "dma_device_type": 1 00:10:15.709 }, 00:10:15.709 { 00:10:15.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.709 "dma_device_type": 2 00:10:15.709 } 00:10:15.709 ], 00:10:15.709 "driver_specific": { 00:10:15.709 "raid": { 00:10:15.709 "uuid": "6e902d80-f455-4c4f-a77d-b9b77fa93fa9", 00:10:15.709 "strip_size_kb": 0, 00:10:15.709 "state": "online", 00:10:15.709 "raid_level": "raid1", 00:10:15.709 "superblock": true, 00:10:15.709 "num_base_bdevs": 3, 00:10:15.709 "num_base_bdevs_discovered": 3, 00:10:15.709 "num_base_bdevs_operational": 3, 00:10:15.709 "base_bdevs_list": [ 00:10:15.709 { 00:10:15.709 "name": "BaseBdev1", 00:10:15.709 "uuid": "c6dfcd39-5664-40b2-9b50-29948c91a087", 00:10:15.709 "is_configured": true, 00:10:15.709 "data_offset": 2048, 00:10:15.709 "data_size": 63488 00:10:15.709 }, 00:10:15.709 { 00:10:15.709 "name": "BaseBdev2", 00:10:15.709 "uuid": "f8907901-3abd-4cd2-9bc2-53587ec552c3", 00:10:15.709 "is_configured": true, 00:10:15.709 "data_offset": 2048, 00:10:15.709 "data_size": 63488 00:10:15.709 }, 00:10:15.709 { 
00:10:15.709 "name": "BaseBdev3", 00:10:15.709 "uuid": "0826321d-c617-487c-a308-1d6e40726a71", 00:10:15.709 "is_configured": true, 00:10:15.709 "data_offset": 2048, 00:10:15.709 "data_size": 63488 00:10:15.709 } 00:10:15.709 ] 00:10:15.709 } 00:10:15.709 } 00:10:15.709 }' 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.709 BaseBdev2 00:10:15.709 BaseBdev3' 00:10:15.709 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.968 16:11:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.968 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.968 [2024-09-28 16:11:30.562653] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.228 
16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.228 "name": "Existed_Raid", 00:10:16.228 "uuid": "6e902d80-f455-4c4f-a77d-b9b77fa93fa9", 00:10:16.228 "strip_size_kb": 0, 00:10:16.228 "state": "online", 00:10:16.228 "raid_level": "raid1", 00:10:16.228 "superblock": true, 00:10:16.228 "num_base_bdevs": 3, 00:10:16.228 "num_base_bdevs_discovered": 2, 00:10:16.228 "num_base_bdevs_operational": 2, 00:10:16.228 "base_bdevs_list": [ 00:10:16.228 { 00:10:16.228 "name": null, 00:10:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.228 "is_configured": false, 00:10:16.228 "data_offset": 0, 00:10:16.228 "data_size": 63488 00:10:16.228 }, 00:10:16.228 { 00:10:16.228 "name": "BaseBdev2", 00:10:16.228 "uuid": "f8907901-3abd-4cd2-9bc2-53587ec552c3", 00:10:16.228 "is_configured": true, 00:10:16.228 "data_offset": 2048, 00:10:16.228 "data_size": 63488 00:10:16.228 }, 00:10:16.228 { 00:10:16.228 "name": "BaseBdev3", 00:10:16.228 "uuid": "0826321d-c617-487c-a308-1d6e40726a71", 00:10:16.228 "is_configured": true, 00:10:16.228 "data_offset": 2048, 00:10:16.228 "data_size": 63488 00:10:16.228 } 00:10:16.228 ] 00:10:16.228 }' 00:10:16.228 16:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.228 
16:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.487 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.487 [2024-09-28 16:11:31.112443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.746 [2024-09-28 16:11:31.266479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.746 [2024-09-28 16:11:31.266695] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.746 [2024-09-28 16:11:31.367762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.746 [2024-09-28 16:11:31.367913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.746 [2024-09-28 16:11:31.367960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.746 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.006 BaseBdev2 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.006 [ 00:10:17.006 { 00:10:17.006 "name": "BaseBdev2", 00:10:17.006 "aliases": [ 00:10:17.006 "52bff388-445f-42e1-a423-6968769adceb" 00:10:17.006 ], 00:10:17.006 "product_name": "Malloc disk", 00:10:17.006 "block_size": 512, 00:10:17.006 "num_blocks": 65536, 00:10:17.006 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:17.006 "assigned_rate_limits": { 00:10:17.006 "rw_ios_per_sec": 0, 00:10:17.006 "rw_mbytes_per_sec": 0, 00:10:17.006 "r_mbytes_per_sec": 0, 00:10:17.006 "w_mbytes_per_sec": 0 00:10:17.006 }, 00:10:17.006 "claimed": false, 00:10:17.006 "zoned": false, 00:10:17.006 "supported_io_types": { 00:10:17.006 "read": true, 00:10:17.006 "write": true, 00:10:17.006 "unmap": true, 00:10:17.006 "flush": true, 00:10:17.006 "reset": true, 00:10:17.006 "nvme_admin": false, 00:10:17.006 "nvme_io": false, 00:10:17.006 
"nvme_io_md": false, 00:10:17.006 "write_zeroes": true, 00:10:17.006 "zcopy": true, 00:10:17.006 "get_zone_info": false, 00:10:17.006 "zone_management": false, 00:10:17.006 "zone_append": false, 00:10:17.006 "compare": false, 00:10:17.006 "compare_and_write": false, 00:10:17.006 "abort": true, 00:10:17.006 "seek_hole": false, 00:10:17.006 "seek_data": false, 00:10:17.006 "copy": true, 00:10:17.006 "nvme_iov_md": false 00:10:17.006 }, 00:10:17.006 "memory_domains": [ 00:10:17.006 { 00:10:17.006 "dma_device_id": "system", 00:10:17.006 "dma_device_type": 1 00:10:17.006 }, 00:10:17.006 { 00:10:17.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.006 "dma_device_type": 2 00:10:17.006 } 00:10:17.006 ], 00:10:17.006 "driver_specific": {} 00:10:17.006 } 00:10:17.006 ] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.006 BaseBdev3 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.006 [ 00:10:17.006 { 00:10:17.006 "name": "BaseBdev3", 00:10:17.006 "aliases": [ 00:10:17.006 "7d3d6759-862b-4b06-b40d-ea692e67bdf9" 00:10:17.006 ], 00:10:17.006 "product_name": "Malloc disk", 00:10:17.006 "block_size": 512, 00:10:17.006 "num_blocks": 65536, 00:10:17.006 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:17.006 "assigned_rate_limits": { 00:10:17.006 "rw_ios_per_sec": 0, 00:10:17.006 "rw_mbytes_per_sec": 0, 00:10:17.006 "r_mbytes_per_sec": 0, 00:10:17.006 "w_mbytes_per_sec": 0 00:10:17.006 }, 00:10:17.006 "claimed": false, 00:10:17.006 "zoned": false, 00:10:17.006 "supported_io_types": { 00:10:17.006 "read": true, 00:10:17.006 "write": true, 00:10:17.006 "unmap": true, 00:10:17.006 "flush": true, 00:10:17.006 "reset": true, 00:10:17.006 "nvme_admin": false, 
00:10:17.006 "nvme_io": false, 00:10:17.006 "nvme_io_md": false, 00:10:17.006 "write_zeroes": true, 00:10:17.006 "zcopy": true, 00:10:17.006 "get_zone_info": false, 00:10:17.006 "zone_management": false, 00:10:17.006 "zone_append": false, 00:10:17.006 "compare": false, 00:10:17.006 "compare_and_write": false, 00:10:17.006 "abort": true, 00:10:17.006 "seek_hole": false, 00:10:17.006 "seek_data": false, 00:10:17.006 "copy": true, 00:10:17.006 "nvme_iov_md": false 00:10:17.006 }, 00:10:17.006 "memory_domains": [ 00:10:17.006 { 00:10:17.006 "dma_device_id": "system", 00:10:17.006 "dma_device_type": 1 00:10:17.006 }, 00:10:17.006 { 00:10:17.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.006 "dma_device_type": 2 00:10:17.006 } 00:10:17.006 ], 00:10:17.006 "driver_specific": {} 00:10:17.006 } 00:10:17.006 ] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.006 [2024-09-28 16:11:31.592208] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.006 [2024-09-28 16:11:31.592325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.006 [2024-09-28 16:11:31.592367] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.006 [2024-09-28 16:11:31.594452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.006 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.007 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.007 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.007 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.007 
16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.007 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.007 "name": "Existed_Raid", 00:10:17.007 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:17.007 "strip_size_kb": 0, 00:10:17.007 "state": "configuring", 00:10:17.007 "raid_level": "raid1", 00:10:17.007 "superblock": true, 00:10:17.007 "num_base_bdevs": 3, 00:10:17.007 "num_base_bdevs_discovered": 2, 00:10:17.007 "num_base_bdevs_operational": 3, 00:10:17.007 "base_bdevs_list": [ 00:10:17.007 { 00:10:17.007 "name": "BaseBdev1", 00:10:17.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.007 "is_configured": false, 00:10:17.007 "data_offset": 0, 00:10:17.007 "data_size": 0 00:10:17.007 }, 00:10:17.007 { 00:10:17.007 "name": "BaseBdev2", 00:10:17.007 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:17.007 "is_configured": true, 00:10:17.007 "data_offset": 2048, 00:10:17.007 "data_size": 63488 00:10:17.007 }, 00:10:17.007 { 00:10:17.007 "name": "BaseBdev3", 00:10:17.007 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:17.007 "is_configured": true, 00:10:17.007 "data_offset": 2048, 00:10:17.007 "data_size": 63488 00:10:17.007 } 00:10:17.007 ] 00:10:17.007 }' 00:10:17.007 16:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.007 16:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.576 [2024-09-28 16:11:32.051347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.576 16:11:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.576 "name": 
"Existed_Raid", 00:10:17.576 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:17.576 "strip_size_kb": 0, 00:10:17.576 "state": "configuring", 00:10:17.576 "raid_level": "raid1", 00:10:17.576 "superblock": true, 00:10:17.576 "num_base_bdevs": 3, 00:10:17.576 "num_base_bdevs_discovered": 1, 00:10:17.576 "num_base_bdevs_operational": 3, 00:10:17.576 "base_bdevs_list": [ 00:10:17.576 { 00:10:17.576 "name": "BaseBdev1", 00:10:17.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.576 "is_configured": false, 00:10:17.576 "data_offset": 0, 00:10:17.576 "data_size": 0 00:10:17.576 }, 00:10:17.576 { 00:10:17.576 "name": null, 00:10:17.576 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:17.576 "is_configured": false, 00:10:17.576 "data_offset": 0, 00:10:17.576 "data_size": 63488 00:10:17.576 }, 00:10:17.576 { 00:10:17.576 "name": "BaseBdev3", 00:10:17.576 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:17.576 "is_configured": true, 00:10:17.576 "data_offset": 2048, 00:10:17.576 "data_size": 63488 00:10:17.576 } 00:10:17.576 ] 00:10:17.576 }' 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.576 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.836 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.836 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.836 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.836 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.836 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.096 
16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.096 [2024-09-28 16:11:32.569369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.096 BaseBdev1 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.096 [ 00:10:18.096 { 00:10:18.096 "name": "BaseBdev1", 00:10:18.096 "aliases": [ 00:10:18.096 "c4416e5c-b855-438a-bb71-fd6994e8e38c" 00:10:18.096 ], 00:10:18.096 "product_name": "Malloc disk", 00:10:18.096 "block_size": 512, 00:10:18.096 "num_blocks": 65536, 00:10:18.096 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:18.096 "assigned_rate_limits": { 00:10:18.096 "rw_ios_per_sec": 0, 00:10:18.096 "rw_mbytes_per_sec": 0, 00:10:18.096 "r_mbytes_per_sec": 0, 00:10:18.096 "w_mbytes_per_sec": 0 00:10:18.096 }, 00:10:18.096 "claimed": true, 00:10:18.096 "claim_type": "exclusive_write", 00:10:18.096 "zoned": false, 00:10:18.096 "supported_io_types": { 00:10:18.096 "read": true, 00:10:18.096 "write": true, 00:10:18.096 "unmap": true, 00:10:18.096 "flush": true, 00:10:18.096 "reset": true, 00:10:18.096 "nvme_admin": false, 00:10:18.096 "nvme_io": false, 00:10:18.096 "nvme_io_md": false, 00:10:18.096 "write_zeroes": true, 00:10:18.096 "zcopy": true, 00:10:18.096 "get_zone_info": false, 00:10:18.096 "zone_management": false, 00:10:18.096 "zone_append": false, 00:10:18.096 "compare": false, 00:10:18.096 "compare_and_write": false, 00:10:18.096 "abort": true, 00:10:18.096 "seek_hole": false, 00:10:18.096 "seek_data": false, 00:10:18.096 "copy": true, 00:10:18.096 "nvme_iov_md": false 00:10:18.096 }, 00:10:18.096 "memory_domains": [ 00:10:18.096 { 00:10:18.096 "dma_device_id": "system", 00:10:18.096 "dma_device_type": 1 00:10:18.096 }, 00:10:18.096 { 00:10:18.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.096 "dma_device_type": 2 00:10:18.096 } 00:10:18.096 ], 00:10:18.096 "driver_specific": {} 00:10:18.096 } 00:10:18.096 ] 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:18.096 
16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.096 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.096 "name": "Existed_Raid", 00:10:18.096 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:18.096 "strip_size_kb": 0, 
00:10:18.096 "state": "configuring", 00:10:18.096 "raid_level": "raid1", 00:10:18.096 "superblock": true, 00:10:18.096 "num_base_bdevs": 3, 00:10:18.096 "num_base_bdevs_discovered": 2, 00:10:18.096 "num_base_bdevs_operational": 3, 00:10:18.096 "base_bdevs_list": [ 00:10:18.096 { 00:10:18.096 "name": "BaseBdev1", 00:10:18.096 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:18.096 "is_configured": true, 00:10:18.096 "data_offset": 2048, 00:10:18.096 "data_size": 63488 00:10:18.096 }, 00:10:18.096 { 00:10:18.096 "name": null, 00:10:18.096 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:18.096 "is_configured": false, 00:10:18.096 "data_offset": 0, 00:10:18.096 "data_size": 63488 00:10:18.096 }, 00:10:18.096 { 00:10:18.097 "name": "BaseBdev3", 00:10:18.097 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:18.097 "is_configured": true, 00:10:18.097 "data_offset": 2048, 00:10:18.097 "data_size": 63488 00:10:18.097 } 00:10:18.097 ] 00:10:18.097 }' 00:10:18.097 16:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.097 16:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.665 [2024-09-28 16:11:33.096567] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.665 "name": "Existed_Raid", 00:10:18.665 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:18.665 "strip_size_kb": 0, 00:10:18.665 "state": "configuring", 00:10:18.665 "raid_level": "raid1", 00:10:18.665 "superblock": true, 00:10:18.665 "num_base_bdevs": 3, 00:10:18.665 "num_base_bdevs_discovered": 1, 00:10:18.665 "num_base_bdevs_operational": 3, 00:10:18.665 "base_bdevs_list": [ 00:10:18.665 { 00:10:18.665 "name": "BaseBdev1", 00:10:18.665 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:18.665 "is_configured": true, 00:10:18.665 "data_offset": 2048, 00:10:18.665 "data_size": 63488 00:10:18.665 }, 00:10:18.665 { 00:10:18.665 "name": null, 00:10:18.665 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:18.665 "is_configured": false, 00:10:18.665 "data_offset": 0, 00:10:18.665 "data_size": 63488 00:10:18.665 }, 00:10:18.665 { 00:10:18.665 "name": null, 00:10:18.665 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:18.665 "is_configured": false, 00:10:18.665 "data_offset": 0, 00:10:18.665 "data_size": 63488 00:10:18.665 } 00:10:18.665 ] 00:10:18.665 }' 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.665 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.924 [2024-09-28 16:11:33.583897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.924 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.183 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.183 16:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.183 "name": "Existed_Raid", 00:10:19.183 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:19.183 "strip_size_kb": 0, 00:10:19.183 "state": "configuring", 00:10:19.183 "raid_level": "raid1", 00:10:19.183 "superblock": true, 00:10:19.183 "num_base_bdevs": 3, 00:10:19.183 "num_base_bdevs_discovered": 2, 00:10:19.183 "num_base_bdevs_operational": 3, 00:10:19.183 "base_bdevs_list": [ 00:10:19.183 { 00:10:19.183 "name": "BaseBdev1", 00:10:19.183 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:19.183 "is_configured": true, 00:10:19.183 "data_offset": 2048, 00:10:19.183 "data_size": 63488 00:10:19.183 }, 00:10:19.183 { 00:10:19.183 "name": null, 00:10:19.183 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:19.183 "is_configured": false, 00:10:19.183 "data_offset": 0, 00:10:19.183 "data_size": 63488 00:10:19.183 }, 00:10:19.183 { 00:10:19.183 "name": "BaseBdev3", 00:10:19.183 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:19.183 "is_configured": true, 00:10:19.183 "data_offset": 2048, 00:10:19.183 "data_size": 63488 00:10:19.183 } 00:10:19.183 ] 00:10:19.183 }' 00:10:19.183 16:11:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.184 16:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.443 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.443 [2024-09-28 16:11:34.067090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.702 "name": "Existed_Raid", 00:10:19.702 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:19.702 "strip_size_kb": 0, 00:10:19.702 "state": "configuring", 00:10:19.702 "raid_level": "raid1", 00:10:19.702 "superblock": true, 00:10:19.702 "num_base_bdevs": 3, 00:10:19.702 "num_base_bdevs_discovered": 1, 00:10:19.702 "num_base_bdevs_operational": 3, 00:10:19.702 "base_bdevs_list": [ 00:10:19.702 { 00:10:19.702 "name": null, 00:10:19.702 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:19.702 "is_configured": false, 00:10:19.702 "data_offset": 0, 00:10:19.702 "data_size": 63488 00:10:19.702 }, 00:10:19.702 { 00:10:19.702 "name": null, 00:10:19.702 "uuid": 
"52bff388-445f-42e1-a423-6968769adceb", 00:10:19.702 "is_configured": false, 00:10:19.702 "data_offset": 0, 00:10:19.702 "data_size": 63488 00:10:19.702 }, 00:10:19.702 { 00:10:19.702 "name": "BaseBdev3", 00:10:19.702 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:19.702 "is_configured": true, 00:10:19.702 "data_offset": 2048, 00:10:19.702 "data_size": 63488 00:10:19.702 } 00:10:19.702 ] 00:10:19.702 }' 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.702 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.271 [2024-09-28 16:11:34.692792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.271 "name": "Existed_Raid", 00:10:20.271 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:20.271 "strip_size_kb": 0, 00:10:20.271 "state": "configuring", 00:10:20.271 
"raid_level": "raid1", 00:10:20.271 "superblock": true, 00:10:20.271 "num_base_bdevs": 3, 00:10:20.271 "num_base_bdevs_discovered": 2, 00:10:20.271 "num_base_bdevs_operational": 3, 00:10:20.271 "base_bdevs_list": [ 00:10:20.271 { 00:10:20.271 "name": null, 00:10:20.271 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:20.271 "is_configured": false, 00:10:20.271 "data_offset": 0, 00:10:20.271 "data_size": 63488 00:10:20.271 }, 00:10:20.271 { 00:10:20.271 "name": "BaseBdev2", 00:10:20.271 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:20.271 "is_configured": true, 00:10:20.271 "data_offset": 2048, 00:10:20.271 "data_size": 63488 00:10:20.271 }, 00:10:20.271 { 00:10:20.271 "name": "BaseBdev3", 00:10:20.271 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:20.271 "is_configured": true, 00:10:20.271 "data_offset": 2048, 00:10:20.271 "data_size": 63488 00:10:20.271 } 00:10:20.271 ] 00:10:20.271 }' 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.271 16:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.530 16:11:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.530 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c4416e5c-b855-438a-bb71-fd6994e8e38c 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.790 [2024-09-28 16:11:35.257259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.790 [2024-09-28 16:11:35.257593] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.790 [2024-09-28 16:11:35.257649] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.790 [2024-09-28 16:11:35.257956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:20.790 NewBaseBdev 00:10:20.790 [2024-09-28 16:11:35.258147] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.790 [2024-09-28 16:11:35.258164] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:20.790 [2024-09-28 16:11:35.258324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:20.790 
16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.790 [ 00:10:20.790 { 00:10:20.790 "name": "NewBaseBdev", 00:10:20.790 "aliases": [ 00:10:20.790 "c4416e5c-b855-438a-bb71-fd6994e8e38c" 00:10:20.790 ], 00:10:20.790 "product_name": "Malloc disk", 00:10:20.790 "block_size": 512, 00:10:20.790 "num_blocks": 65536, 00:10:20.790 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:20.790 "assigned_rate_limits": { 00:10:20.790 "rw_ios_per_sec": 0, 00:10:20.790 "rw_mbytes_per_sec": 0, 00:10:20.790 "r_mbytes_per_sec": 0, 00:10:20.790 "w_mbytes_per_sec": 0 00:10:20.790 }, 00:10:20.790 "claimed": true, 00:10:20.790 "claim_type": "exclusive_write", 00:10:20.790 
"zoned": false, 00:10:20.790 "supported_io_types": { 00:10:20.790 "read": true, 00:10:20.790 "write": true, 00:10:20.790 "unmap": true, 00:10:20.790 "flush": true, 00:10:20.790 "reset": true, 00:10:20.790 "nvme_admin": false, 00:10:20.790 "nvme_io": false, 00:10:20.790 "nvme_io_md": false, 00:10:20.790 "write_zeroes": true, 00:10:20.790 "zcopy": true, 00:10:20.790 "get_zone_info": false, 00:10:20.790 "zone_management": false, 00:10:20.790 "zone_append": false, 00:10:20.790 "compare": false, 00:10:20.790 "compare_and_write": false, 00:10:20.790 "abort": true, 00:10:20.790 "seek_hole": false, 00:10:20.790 "seek_data": false, 00:10:20.790 "copy": true, 00:10:20.790 "nvme_iov_md": false 00:10:20.790 }, 00:10:20.790 "memory_domains": [ 00:10:20.790 { 00:10:20.790 "dma_device_id": "system", 00:10:20.790 "dma_device_type": 1 00:10:20.790 }, 00:10:20.790 { 00:10:20.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.790 "dma_device_type": 2 00:10:20.790 } 00:10:20.790 ], 00:10:20.790 "driver_specific": {} 00:10:20.790 } 00:10:20.790 ] 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.790 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.791 "name": "Existed_Raid", 00:10:20.791 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:20.791 "strip_size_kb": 0, 00:10:20.791 "state": "online", 00:10:20.791 "raid_level": "raid1", 00:10:20.791 "superblock": true, 00:10:20.791 "num_base_bdevs": 3, 00:10:20.791 "num_base_bdevs_discovered": 3, 00:10:20.791 "num_base_bdevs_operational": 3, 00:10:20.791 "base_bdevs_list": [ 00:10:20.791 { 00:10:20.791 "name": "NewBaseBdev", 00:10:20.791 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:20.791 "is_configured": true, 00:10:20.791 "data_offset": 2048, 00:10:20.791 "data_size": 63488 00:10:20.791 }, 00:10:20.791 { 00:10:20.791 "name": "BaseBdev2", 00:10:20.791 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:20.791 "is_configured": true, 00:10:20.791 "data_offset": 2048, 00:10:20.791 "data_size": 63488 00:10:20.791 }, 00:10:20.791 
{ 00:10:20.791 "name": "BaseBdev3", 00:10:20.791 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:20.791 "is_configured": true, 00:10:20.791 "data_offset": 2048, 00:10:20.791 "data_size": 63488 00:10:20.791 } 00:10:20.791 ] 00:10:20.791 }' 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.791 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.359 [2024-09-28 16:11:35.744692] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.359 "name": "Existed_Raid", 00:10:21.359 
"aliases": [ 00:10:21.359 "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e" 00:10:21.359 ], 00:10:21.359 "product_name": "Raid Volume", 00:10:21.359 "block_size": 512, 00:10:21.359 "num_blocks": 63488, 00:10:21.359 "uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:21.359 "assigned_rate_limits": { 00:10:21.359 "rw_ios_per_sec": 0, 00:10:21.359 "rw_mbytes_per_sec": 0, 00:10:21.359 "r_mbytes_per_sec": 0, 00:10:21.359 "w_mbytes_per_sec": 0 00:10:21.359 }, 00:10:21.359 "claimed": false, 00:10:21.359 "zoned": false, 00:10:21.359 "supported_io_types": { 00:10:21.359 "read": true, 00:10:21.359 "write": true, 00:10:21.359 "unmap": false, 00:10:21.359 "flush": false, 00:10:21.359 "reset": true, 00:10:21.359 "nvme_admin": false, 00:10:21.359 "nvme_io": false, 00:10:21.359 "nvme_io_md": false, 00:10:21.359 "write_zeroes": true, 00:10:21.359 "zcopy": false, 00:10:21.359 "get_zone_info": false, 00:10:21.359 "zone_management": false, 00:10:21.359 "zone_append": false, 00:10:21.359 "compare": false, 00:10:21.359 "compare_and_write": false, 00:10:21.359 "abort": false, 00:10:21.359 "seek_hole": false, 00:10:21.359 "seek_data": false, 00:10:21.359 "copy": false, 00:10:21.359 "nvme_iov_md": false 00:10:21.359 }, 00:10:21.359 "memory_domains": [ 00:10:21.359 { 00:10:21.359 "dma_device_id": "system", 00:10:21.359 "dma_device_type": 1 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.359 "dma_device_type": 2 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "system", 00:10:21.359 "dma_device_type": 1 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.359 "dma_device_type": 2 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "system", 00:10:21.359 "dma_device_type": 1 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.359 "dma_device_type": 2 00:10:21.359 } 00:10:21.359 ], 00:10:21.359 "driver_specific": { 00:10:21.359 "raid": { 00:10:21.359 
"uuid": "c568c0d3-49a9-4bb6-b2d9-b7b97a3a8b7e", 00:10:21.359 "strip_size_kb": 0, 00:10:21.359 "state": "online", 00:10:21.359 "raid_level": "raid1", 00:10:21.359 "superblock": true, 00:10:21.359 "num_base_bdevs": 3, 00:10:21.359 "num_base_bdevs_discovered": 3, 00:10:21.359 "num_base_bdevs_operational": 3, 00:10:21.359 "base_bdevs_list": [ 00:10:21.359 { 00:10:21.359 "name": "NewBaseBdev", 00:10:21.359 "uuid": "c4416e5c-b855-438a-bb71-fd6994e8e38c", 00:10:21.359 "is_configured": true, 00:10:21.359 "data_offset": 2048, 00:10:21.359 "data_size": 63488 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "name": "BaseBdev2", 00:10:21.359 "uuid": "52bff388-445f-42e1-a423-6968769adceb", 00:10:21.359 "is_configured": true, 00:10:21.359 "data_offset": 2048, 00:10:21.359 "data_size": 63488 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "name": "BaseBdev3", 00:10:21.359 "uuid": "7d3d6759-862b-4b06-b40d-ea692e67bdf9", 00:10:21.359 "is_configured": true, 00:10:21.359 "data_offset": 2048, 00:10:21.359 "data_size": 63488 00:10:21.359 } 00:10:21.359 ] 00:10:21.359 } 00:10:21.359 } 00:10:21.359 }' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:21.359 BaseBdev2 00:10:21.359 BaseBdev3' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:21.359 16:11:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.359 16:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.360 16:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.360 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.360 16:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.360 16:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.360 16:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.360 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.360 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.360 [2024-09-28 16:11:36.039917] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.360 [2024-09-28 16:11:36.039993] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.360 [2024-09-28 16:11:36.040125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.360 [2024-09-28 16:11:36.040452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.360 [2024-09-28 16:11:36.040504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68043 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 68043 ']' 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68043 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68043 00:10:21.635 killing process with pid 68043 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68043' 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68043 00:10:21.635 [2024-09-28 16:11:36.093754] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.635 16:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68043 00:10:21.909 [2024-09-28 16:11:36.408418] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.290 16:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.290 00:10:23.290 real 0m10.905s 00:10:23.290 user 0m17.075s 00:10:23.290 sys 0m2.004s 00:10:23.290 16:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.290 ************************************ 00:10:23.290 END TEST raid_state_function_test_sb 00:10:23.290 ************************************ 00:10:23.290 16:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.290 16:11:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:23.290 16:11:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:23.290 16:11:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.290 16:11:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.290 ************************************ 00:10:23.290 START TEST raid_superblock_test 00:10:23.290 ************************************ 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68669 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68669 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68669 ']' 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.290 16:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.290 [2024-09-28 16:11:37.908918] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:23.290 [2024-09-28 16:11:37.909115] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68669 ] 00:10:23.550 [2024-09-28 16:11:38.071127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.810 [2024-09-28 16:11:38.316537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.070 [2024-09-28 16:11:38.543944] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.070 [2024-09-28 16:11:38.544095] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.070 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.070 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:24.070 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:24.070 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.070 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:24.070 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:24.070 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:24.071 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.071 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.071 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.071 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:24.071 
16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.071 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.331 malloc1 00:10:24.331 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.331 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 [2024-09-28 16:11:38.795682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.332 [2024-09-28 16:11:38.795836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.332 [2024-09-28 16:11:38.795882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:24.332 [2024-09-28 16:11:38.795917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.332 [2024-09-28 16:11:38.798300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.332 [2024-09-28 16:11:38.798385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.332 pt1 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 malloc2 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 [2024-09-28 16:11:38.866897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.332 [2024-09-28 16:11:38.867003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.332 [2024-09-28 16:11:38.867046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:24.332 [2024-09-28 16:11:38.867074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.332 [2024-09-28 16:11:38.869465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.332 [2024-09-28 16:11:38.869538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.332 
pt2 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 malloc3 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 [2024-09-28 16:11:38.927210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.332 [2024-09-28 16:11:38.927322] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.332 [2024-09-28 16:11:38.927361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:24.332 [2024-09-28 16:11:38.927389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.332 [2024-09-28 16:11:38.929691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.332 [2024-09-28 16:11:38.929774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.332 pt3 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 [2024-09-28 16:11:38.939274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.332 [2024-09-28 16:11:38.941350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.332 [2024-09-28 16:11:38.941465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.332 [2024-09-28 16:11:38.941666] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:24.332 [2024-09-28 16:11:38.941714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.332 [2024-09-28 16:11:38.941963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:24.332 
[2024-09-28 16:11:38.942173] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:24.332 [2024-09-28 16:11:38.942216] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:24.332 [2024-09-28 16:11:38.942422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.332 "name": "raid_bdev1", 00:10:24.332 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:24.332 "strip_size_kb": 0, 00:10:24.332 "state": "online", 00:10:24.332 "raid_level": "raid1", 00:10:24.332 "superblock": true, 00:10:24.332 "num_base_bdevs": 3, 00:10:24.332 "num_base_bdevs_discovered": 3, 00:10:24.332 "num_base_bdevs_operational": 3, 00:10:24.332 "base_bdevs_list": [ 00:10:24.332 { 00:10:24.332 "name": "pt1", 00:10:24.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.332 "is_configured": true, 00:10:24.332 "data_offset": 2048, 00:10:24.332 "data_size": 63488 00:10:24.332 }, 00:10:24.332 { 00:10:24.332 "name": "pt2", 00:10:24.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.332 "is_configured": true, 00:10:24.332 "data_offset": 2048, 00:10:24.332 "data_size": 63488 00:10:24.332 }, 00:10:24.332 { 00:10:24.332 "name": "pt3", 00:10:24.332 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.332 "is_configured": true, 00:10:24.332 "data_offset": 2048, 00:10:24.332 "data_size": 63488 00:10:24.332 } 00:10:24.332 ] 00:10:24.332 }' 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.332 16:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.902 16:11:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.902 [2024-09-28 16:11:39.402774] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.902 "name": "raid_bdev1", 00:10:24.902 "aliases": [ 00:10:24.902 "ac698eb6-ea5e-46da-a21c-faaf8e18a7da" 00:10:24.902 ], 00:10:24.902 "product_name": "Raid Volume", 00:10:24.902 "block_size": 512, 00:10:24.902 "num_blocks": 63488, 00:10:24.902 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:24.902 "assigned_rate_limits": { 00:10:24.902 "rw_ios_per_sec": 0, 00:10:24.902 "rw_mbytes_per_sec": 0, 00:10:24.902 "r_mbytes_per_sec": 0, 00:10:24.902 "w_mbytes_per_sec": 0 00:10:24.902 }, 00:10:24.902 "claimed": false, 00:10:24.902 "zoned": false, 00:10:24.902 "supported_io_types": { 00:10:24.902 "read": true, 00:10:24.902 "write": true, 00:10:24.902 "unmap": false, 00:10:24.902 "flush": false, 00:10:24.902 "reset": true, 00:10:24.902 "nvme_admin": false, 00:10:24.902 "nvme_io": false, 00:10:24.902 "nvme_io_md": false, 00:10:24.902 "write_zeroes": true, 00:10:24.902 "zcopy": false, 00:10:24.902 "get_zone_info": false, 00:10:24.902 "zone_management": false, 00:10:24.902 "zone_append": false, 00:10:24.902 "compare": false, 00:10:24.902 
"compare_and_write": false, 00:10:24.902 "abort": false, 00:10:24.902 "seek_hole": false, 00:10:24.902 "seek_data": false, 00:10:24.902 "copy": false, 00:10:24.902 "nvme_iov_md": false 00:10:24.902 }, 00:10:24.902 "memory_domains": [ 00:10:24.902 { 00:10:24.902 "dma_device_id": "system", 00:10:24.902 "dma_device_type": 1 00:10:24.902 }, 00:10:24.902 { 00:10:24.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.902 "dma_device_type": 2 00:10:24.902 }, 00:10:24.902 { 00:10:24.902 "dma_device_id": "system", 00:10:24.902 "dma_device_type": 1 00:10:24.902 }, 00:10:24.902 { 00:10:24.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.902 "dma_device_type": 2 00:10:24.902 }, 00:10:24.902 { 00:10:24.902 "dma_device_id": "system", 00:10:24.902 "dma_device_type": 1 00:10:24.902 }, 00:10:24.902 { 00:10:24.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.902 "dma_device_type": 2 00:10:24.902 } 00:10:24.902 ], 00:10:24.902 "driver_specific": { 00:10:24.902 "raid": { 00:10:24.902 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:24.902 "strip_size_kb": 0, 00:10:24.902 "state": "online", 00:10:24.902 "raid_level": "raid1", 00:10:24.902 "superblock": true, 00:10:24.902 "num_base_bdevs": 3, 00:10:24.902 "num_base_bdevs_discovered": 3, 00:10:24.902 "num_base_bdevs_operational": 3, 00:10:24.902 "base_bdevs_list": [ 00:10:24.902 { 00:10:24.902 "name": "pt1", 00:10:24.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.902 "is_configured": true, 00:10:24.902 "data_offset": 2048, 00:10:24.902 "data_size": 63488 00:10:24.902 }, 00:10:24.902 { 00:10:24.902 "name": "pt2", 00:10:24.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.902 "is_configured": true, 00:10:24.902 "data_offset": 2048, 00:10:24.902 "data_size": 63488 00:10:24.902 }, 00:10:24.902 { 00:10:24.902 "name": "pt3", 00:10:24.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.902 "is_configured": true, 00:10:24.902 "data_offset": 2048, 00:10:24.902 "data_size": 63488 00:10:24.902 } 
00:10:24.902 ] 00:10:24.902 } 00:10:24.902 } 00:10:24.902 }' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:24.902 pt2 00:10:24.902 pt3' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.902 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.163 [2024-09-28 16:11:39.690269] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ac698eb6-ea5e-46da-a21c-faaf8e18a7da 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ac698eb6-ea5e-46da-a21c-faaf8e18a7da ']' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.163 [2024-09-28 16:11:39.733927] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.163 [2024-09-28 16:11:39.733994] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.163 [2024-09-28 16:11:39.734098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.163 [2024-09-28 16:11:39.734187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.163 [2024-09-28 16:11:39.734199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.163 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:25.164 16:11:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.164 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.423 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.423 [2024-09-28 16:11:39.885689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:25.424 [2024-09-28 16:11:39.887861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:25.424 [2024-09-28 16:11:39.887972] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:25.424 [2024-09-28 16:11:39.888041] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:25.424 [2024-09-28 16:11:39.888126] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:25.424 [2024-09-28 16:11:39.888202] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:25.424 [2024-09-28 16:11:39.888311] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.424 [2024-09-28 16:11:39.888342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:25.424 request: 00:10:25.424 { 00:10:25.424 "name": "raid_bdev1", 00:10:25.424 "raid_level": "raid1", 00:10:25.424 "base_bdevs": [ 00:10:25.424 "malloc1", 00:10:25.424 "malloc2", 00:10:25.424 "malloc3" 00:10:25.424 ], 00:10:25.424 "superblock": false, 00:10:25.424 "method": "bdev_raid_create", 00:10:25.424 "req_id": 1 00:10:25.424 } 00:10:25.424 Got JSON-RPC error response 00:10:25.424 response: 00:10:25.424 { 00:10:25.424 "code": -17, 00:10:25.424 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:25.424 } 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.424 [2024-09-28 16:11:39.953554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.424 [2024-09-28 16:11:39.953659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.424 [2024-09-28 16:11:39.953702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:25.424 [2024-09-28 16:11:39.953733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.424 [2024-09-28 16:11:39.956187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.424 [2024-09-28 16:11:39.956275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.424 [2024-09-28 16:11:39.956368] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:25.424 [2024-09-28 16:11:39.956451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.424 pt1 00:10:25.424 
16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.424 16:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.424 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.424 "name": "raid_bdev1", 00:10:25.424 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:25.424 "strip_size_kb": 0, 00:10:25.424 
"state": "configuring", 00:10:25.424 "raid_level": "raid1", 00:10:25.424 "superblock": true, 00:10:25.424 "num_base_bdevs": 3, 00:10:25.424 "num_base_bdevs_discovered": 1, 00:10:25.424 "num_base_bdevs_operational": 3, 00:10:25.424 "base_bdevs_list": [ 00:10:25.424 { 00:10:25.424 "name": "pt1", 00:10:25.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.424 "is_configured": true, 00:10:25.424 "data_offset": 2048, 00:10:25.424 "data_size": 63488 00:10:25.424 }, 00:10:25.424 { 00:10:25.424 "name": null, 00:10:25.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.424 "is_configured": false, 00:10:25.424 "data_offset": 2048, 00:10:25.424 "data_size": 63488 00:10:25.424 }, 00:10:25.424 { 00:10:25.424 "name": null, 00:10:25.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.424 "is_configured": false, 00:10:25.424 "data_offset": 2048, 00:10:25.424 "data_size": 63488 00:10:25.424 } 00:10:25.424 ] 00:10:25.424 }' 00:10:25.424 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.424 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.992 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.993 [2024-09-28 16:11:40.396821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.993 [2024-09-28 16:11:40.396938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.993 [2024-09-28 16:11:40.396981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:25.993 
[2024-09-28 16:11:40.397011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.993 [2024-09-28 16:11:40.397457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.993 [2024-09-28 16:11:40.397512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.993 [2024-09-28 16:11:40.397614] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.993 [2024-09-28 16:11:40.397661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.993 pt2 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.993 [2024-09-28 16:11:40.408814] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.993 "name": "raid_bdev1", 00:10:25.993 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:25.993 "strip_size_kb": 0, 00:10:25.993 "state": "configuring", 00:10:25.993 "raid_level": "raid1", 00:10:25.993 "superblock": true, 00:10:25.993 "num_base_bdevs": 3, 00:10:25.993 "num_base_bdevs_discovered": 1, 00:10:25.993 "num_base_bdevs_operational": 3, 00:10:25.993 "base_bdevs_list": [ 00:10:25.993 { 00:10:25.993 "name": "pt1", 00:10:25.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.993 "is_configured": true, 00:10:25.993 "data_offset": 2048, 00:10:25.993 "data_size": 63488 00:10:25.993 }, 00:10:25.993 { 00:10:25.993 "name": null, 00:10:25.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.993 "is_configured": false, 00:10:25.993 "data_offset": 0, 00:10:25.993 "data_size": 63488 00:10:25.993 }, 00:10:25.993 { 00:10:25.993 "name": null, 00:10:25.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.993 "is_configured": false, 00:10:25.993 
"data_offset": 2048, 00:10:25.993 "data_size": 63488 00:10:25.993 } 00:10:25.993 ] 00:10:25.993 }' 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.993 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 [2024-09-28 16:11:40.848024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.253 [2024-09-28 16:11:40.848157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.253 [2024-09-28 16:11:40.848191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:26.253 [2024-09-28 16:11:40.848220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.253 [2024-09-28 16:11:40.848706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.253 [2024-09-28 16:11:40.848768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.253 [2024-09-28 16:11:40.848883] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.253 [2024-09-28 16:11:40.848946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.253 pt2 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.253 16:11:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 [2024-09-28 16:11:40.860018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.253 [2024-09-28 16:11:40.860102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.253 [2024-09-28 16:11:40.860138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:26.253 [2024-09-28 16:11:40.860171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.253 [2024-09-28 16:11:40.860575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.253 [2024-09-28 16:11:40.860635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.253 [2024-09-28 16:11:40.860718] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:26.253 [2024-09-28 16:11:40.860766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.253 [2024-09-28 16:11:40.860913] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.253 [2024-09-28 16:11:40.860954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.253 [2024-09-28 16:11:40.861210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:26.253 [2024-09-28 16:11:40.861399] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:26.253 [2024-09-28 16:11:40.861410] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:26.253 [2024-09-28 16:11:40.861562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.253 pt3 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.253 16:11:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.253 "name": "raid_bdev1", 00:10:26.253 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:26.253 "strip_size_kb": 0, 00:10:26.253 "state": "online", 00:10:26.253 "raid_level": "raid1", 00:10:26.253 "superblock": true, 00:10:26.253 "num_base_bdevs": 3, 00:10:26.253 "num_base_bdevs_discovered": 3, 00:10:26.253 "num_base_bdevs_operational": 3, 00:10:26.253 "base_bdevs_list": [ 00:10:26.253 { 00:10:26.253 "name": "pt1", 00:10:26.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.253 "is_configured": true, 00:10:26.253 "data_offset": 2048, 00:10:26.253 "data_size": 63488 00:10:26.253 }, 00:10:26.253 { 00:10:26.253 "name": "pt2", 00:10:26.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.253 "is_configured": true, 00:10:26.253 "data_offset": 2048, 00:10:26.253 "data_size": 63488 00:10:26.253 }, 00:10:26.253 { 00:10:26.253 "name": "pt3", 00:10:26.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.253 "is_configured": true, 00:10:26.253 "data_offset": 2048, 00:10:26.253 "data_size": 63488 00:10:26.253 } 00:10:26.253 ] 00:10:26.253 }' 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.253 16:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.823 [2024-09-28 16:11:41.275556] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.823 "name": "raid_bdev1", 00:10:26.823 "aliases": [ 00:10:26.823 "ac698eb6-ea5e-46da-a21c-faaf8e18a7da" 00:10:26.823 ], 00:10:26.823 "product_name": "Raid Volume", 00:10:26.823 "block_size": 512, 00:10:26.823 "num_blocks": 63488, 00:10:26.823 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:26.823 "assigned_rate_limits": { 00:10:26.823 "rw_ios_per_sec": 0, 00:10:26.823 "rw_mbytes_per_sec": 0, 00:10:26.823 "r_mbytes_per_sec": 0, 00:10:26.823 "w_mbytes_per_sec": 0 00:10:26.823 }, 00:10:26.823 "claimed": false, 00:10:26.823 "zoned": false, 00:10:26.823 "supported_io_types": { 00:10:26.823 "read": true, 00:10:26.823 "write": true, 00:10:26.823 "unmap": false, 00:10:26.823 "flush": false, 00:10:26.823 "reset": true, 00:10:26.823 "nvme_admin": false, 00:10:26.823 "nvme_io": false, 00:10:26.823 "nvme_io_md": false, 00:10:26.823 "write_zeroes": true, 00:10:26.823 "zcopy": false, 00:10:26.823 "get_zone_info": 
false, 00:10:26.823 "zone_management": false, 00:10:26.823 "zone_append": false, 00:10:26.823 "compare": false, 00:10:26.823 "compare_and_write": false, 00:10:26.823 "abort": false, 00:10:26.823 "seek_hole": false, 00:10:26.823 "seek_data": false, 00:10:26.823 "copy": false, 00:10:26.823 "nvme_iov_md": false 00:10:26.823 }, 00:10:26.823 "memory_domains": [ 00:10:26.823 { 00:10:26.823 "dma_device_id": "system", 00:10:26.823 "dma_device_type": 1 00:10:26.823 }, 00:10:26.823 { 00:10:26.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.823 "dma_device_type": 2 00:10:26.823 }, 00:10:26.823 { 00:10:26.823 "dma_device_id": "system", 00:10:26.823 "dma_device_type": 1 00:10:26.823 }, 00:10:26.823 { 00:10:26.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.823 "dma_device_type": 2 00:10:26.823 }, 00:10:26.823 { 00:10:26.823 "dma_device_id": "system", 00:10:26.823 "dma_device_type": 1 00:10:26.823 }, 00:10:26.823 { 00:10:26.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.823 "dma_device_type": 2 00:10:26.823 } 00:10:26.823 ], 00:10:26.823 "driver_specific": { 00:10:26.823 "raid": { 00:10:26.823 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:26.823 "strip_size_kb": 0, 00:10:26.823 "state": "online", 00:10:26.823 "raid_level": "raid1", 00:10:26.823 "superblock": true, 00:10:26.823 "num_base_bdevs": 3, 00:10:26.823 "num_base_bdevs_discovered": 3, 00:10:26.823 "num_base_bdevs_operational": 3, 00:10:26.823 "base_bdevs_list": [ 00:10:26.823 { 00:10:26.823 "name": "pt1", 00:10:26.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.823 "is_configured": true, 00:10:26.823 "data_offset": 2048, 00:10:26.823 "data_size": 63488 00:10:26.823 }, 00:10:26.823 { 00:10:26.823 "name": "pt2", 00:10:26.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.823 "is_configured": true, 00:10:26.823 "data_offset": 2048, 00:10:26.823 "data_size": 63488 00:10:26.823 }, 00:10:26.823 { 00:10:26.823 "name": "pt3", 00:10:26.823 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:26.823 "is_configured": true, 00:10:26.823 "data_offset": 2048, 00:10:26.823 "data_size": 63488 00:10:26.823 } 00:10:26.823 ] 00:10:26.823 } 00:10:26.823 } 00:10:26.823 }' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.823 pt2 00:10:26.823 pt3' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.823 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.083 [2024-09-28 16:11:41.567124] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.083 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ac698eb6-ea5e-46da-a21c-faaf8e18a7da '!=' ac698eb6-ea5e-46da-a21c-faaf8e18a7da ']' 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 [2024-09-28 16:11:41.598841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.084 16:11:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.084 "name": "raid_bdev1", 00:10:27.084 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:27.084 "strip_size_kb": 0, 00:10:27.084 "state": "online", 00:10:27.084 "raid_level": "raid1", 00:10:27.084 "superblock": true, 00:10:27.084 "num_base_bdevs": 3, 00:10:27.084 "num_base_bdevs_discovered": 2, 00:10:27.084 "num_base_bdevs_operational": 2, 00:10:27.084 "base_bdevs_list": [ 00:10:27.084 { 00:10:27.084 "name": null, 00:10:27.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.084 "is_configured": false, 00:10:27.084 "data_offset": 0, 00:10:27.084 "data_size": 63488 00:10:27.084 }, 00:10:27.084 { 00:10:27.084 "name": "pt2", 00:10:27.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.084 "is_configured": true, 00:10:27.084 "data_offset": 2048, 00:10:27.084 "data_size": 63488 00:10:27.084 }, 00:10:27.084 { 00:10:27.084 "name": "pt3", 00:10:27.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.084 "is_configured": true, 00:10:27.084 "data_offset": 2048, 00:10:27.084 "data_size": 63488 00:10:27.084 } 
00:10:27.084 ] 00:10:27.084 }' 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.084 16:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.653 [2024-09-28 16:11:42.046108] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.653 [2024-09-28 16:11:42.046179] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.653 [2024-09-28 16:11:42.046287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.653 [2024-09-28 16:11:42.046373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.653 [2024-09-28 16:11:42.046420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.653 16:11:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.653 [2024-09-28 16:11:42.129937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:27.653 [2024-09-28 16:11:42.130028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.653 [2024-09-28 16:11:42.130076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:27.653 [2024-09-28 16:11:42.130111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.653 [2024-09-28 16:11:42.132612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.653 [2024-09-28 16:11:42.132688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:27.653 [2024-09-28 16:11:42.132796] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:27.653 [2024-09-28 16:11:42.132864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.653 pt2 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.653 16:11:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.653 "name": "raid_bdev1", 00:10:27.653 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:27.653 "strip_size_kb": 0, 00:10:27.653 "state": "configuring", 00:10:27.653 "raid_level": "raid1", 00:10:27.653 "superblock": true, 00:10:27.653 "num_base_bdevs": 3, 00:10:27.653 "num_base_bdevs_discovered": 1, 00:10:27.653 "num_base_bdevs_operational": 2, 00:10:27.653 "base_bdevs_list": [ 00:10:27.653 { 00:10:27.653 "name": null, 00:10:27.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.653 "is_configured": false, 00:10:27.653 "data_offset": 2048, 00:10:27.653 "data_size": 63488 00:10:27.653 }, 00:10:27.653 { 00:10:27.653 "name": "pt2", 00:10:27.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.653 "is_configured": true, 00:10:27.653 "data_offset": 2048, 00:10:27.653 "data_size": 63488 00:10:27.653 }, 00:10:27.653 { 00:10:27.653 "name": null, 00:10:27.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.653 "is_configured": false, 00:10:27.653 "data_offset": 2048, 00:10:27.653 "data_size": 63488 00:10:27.653 } 
00:10:27.653 ] 00:10:27.653 }' 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.653 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.914 [2024-09-28 16:11:42.565239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:27.914 [2024-09-28 16:11:42.565364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.914 [2024-09-28 16:11:42.565403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:27.914 [2024-09-28 16:11:42.565433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.914 [2024-09-28 16:11:42.565925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.914 [2024-09-28 16:11:42.565984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:27.914 [2024-09-28 16:11:42.566070] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:27.914 [2024-09-28 16:11:42.566102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:27.914 [2024-09-28 16:11:42.566252] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:27.914 [2024-09-28 16:11:42.566266] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.914 [2024-09-28 16:11:42.566526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:27.914 [2024-09-28 16:11:42.566671] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:27.914 [2024-09-28 16:11:42.566680] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:27.914 [2024-09-28 16:11:42.566839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.914 pt3 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.914 
16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.914 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.174 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.174 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.174 "name": "raid_bdev1", 00:10:28.174 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:28.174 "strip_size_kb": 0, 00:10:28.174 "state": "online", 00:10:28.174 "raid_level": "raid1", 00:10:28.174 "superblock": true, 00:10:28.174 "num_base_bdevs": 3, 00:10:28.174 "num_base_bdevs_discovered": 2, 00:10:28.174 "num_base_bdevs_operational": 2, 00:10:28.174 "base_bdevs_list": [ 00:10:28.174 { 00:10:28.174 "name": null, 00:10:28.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.174 "is_configured": false, 00:10:28.174 "data_offset": 2048, 00:10:28.174 "data_size": 63488 00:10:28.174 }, 00:10:28.174 { 00:10:28.174 "name": "pt2", 00:10:28.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.174 "is_configured": true, 00:10:28.174 "data_offset": 2048, 00:10:28.174 "data_size": 63488 00:10:28.174 }, 00:10:28.174 { 00:10:28.174 "name": "pt3", 00:10:28.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.174 "is_configured": true, 00:10:28.174 "data_offset": 2048, 00:10:28.174 "data_size": 63488 00:10:28.174 } 00:10:28.174 ] 00:10:28.174 }' 00:10:28.174 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.174 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 [2024-09-28 16:11:42.932524] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.434 [2024-09-28 16:11:42.932600] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.434 [2024-09-28 16:11:42.932723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.434 [2024-09-28 16:11:42.932807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.434 [2024-09-28 16:11:42.932850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 16:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 [2024-09-28 16:11:43.008408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.434 [2024-09-28 16:11:43.008515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.434 [2024-09-28 16:11:43.008554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:28.434 [2024-09-28 16:11:43.008600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.434 [2024-09-28 16:11:43.011069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.434 [2024-09-28 16:11:43.011152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.434 [2024-09-28 16:11:43.011276] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:28.434 [2024-09-28 16:11:43.011340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:28.434 [2024-09-28 16:11:43.011497] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:28.434 [2024-09-28 16:11:43.011549] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.434 [2024-09-28 16:11:43.011598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:28.434 [2024-09-28 16:11:43.011690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.434 pt1 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.434 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.434 "name": "raid_bdev1", 00:10:28.434 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:28.434 "strip_size_kb": 0, 00:10:28.434 "state": "configuring", 00:10:28.434 "raid_level": "raid1", 00:10:28.434 "superblock": true, 00:10:28.434 "num_base_bdevs": 3, 00:10:28.434 "num_base_bdevs_discovered": 1, 00:10:28.434 "num_base_bdevs_operational": 2, 00:10:28.434 "base_bdevs_list": [ 00:10:28.434 { 00:10:28.434 "name": null, 00:10:28.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.434 "is_configured": false, 00:10:28.434 "data_offset": 2048, 00:10:28.434 "data_size": 63488 00:10:28.434 }, 00:10:28.434 { 00:10:28.435 "name": "pt2", 00:10:28.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.435 "is_configured": true, 00:10:28.435 "data_offset": 2048, 00:10:28.435 "data_size": 63488 00:10:28.435 }, 00:10:28.435 { 00:10:28.435 "name": null, 00:10:28.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.435 "is_configured": false, 00:10:28.435 "data_offset": 2048, 00:10:28.435 "data_size": 63488 00:10:28.435 } 00:10:28.435 ] 00:10:28.435 }' 00:10:28.435 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.435 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.004 [2024-09-28 16:11:43.519524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:29.004 [2024-09-28 16:11:43.519618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.004 [2024-09-28 16:11:43.519656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:29.004 [2024-09-28 16:11:43.519687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.004 [2024-09-28 16:11:43.520160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.004 [2024-09-28 16:11:43.520216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:29.004 [2024-09-28 16:11:43.520332] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:29.004 [2024-09-28 16:11:43.520407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:29.004 [2024-09-28 16:11:43.520593] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:29.004 [2024-09-28 16:11:43.520631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.004 [2024-09-28 16:11:43.520944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:29.004 [2024-09-28 16:11:43.521139] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:29.004 [2024-09-28 16:11:43.521187] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:29.004 [2024-09-28 16:11:43.521397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.004 pt3 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.004 "name": "raid_bdev1", 00:10:29.004 "uuid": "ac698eb6-ea5e-46da-a21c-faaf8e18a7da", 00:10:29.004 "strip_size_kb": 0, 00:10:29.004 "state": "online", 00:10:29.004 "raid_level": "raid1", 00:10:29.004 "superblock": true, 00:10:29.004 "num_base_bdevs": 3, 00:10:29.004 "num_base_bdevs_discovered": 2, 00:10:29.004 "num_base_bdevs_operational": 2, 00:10:29.004 "base_bdevs_list": [ 00:10:29.004 { 00:10:29.004 "name": null, 00:10:29.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.004 "is_configured": false, 00:10:29.004 "data_offset": 2048, 00:10:29.004 "data_size": 63488 00:10:29.004 }, 00:10:29.004 { 00:10:29.004 "name": "pt2", 00:10:29.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.004 "is_configured": true, 00:10:29.004 "data_offset": 2048, 00:10:29.004 "data_size": 63488 00:10:29.004 }, 00:10:29.004 { 00:10:29.004 "name": "pt3", 00:10:29.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.004 "is_configured": true, 00:10:29.004 "data_offset": 2048, 00:10:29.004 "data_size": 63488 00:10:29.004 } 00:10:29.004 ] 00:10:29.004 }' 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.004 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.574 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:29.574 16:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:29.574 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.574 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.574 16:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.574 16:11:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.574 [2024-09-28 16:11:44.011005] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ac698eb6-ea5e-46da-a21c-faaf8e18a7da '!=' ac698eb6-ea5e-46da-a21c-faaf8e18a7da ']' 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68669 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68669 ']' 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68669 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68669 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.574 killing process with pid 68669 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68669' 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 68669 00:10:29.574 [2024-09-28 16:11:44.087700] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.574 [2024-09-28 16:11:44.087849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.574 16:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68669 00:10:29.574 [2024-09-28 16:11:44.087922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.574 [2024-09-28 16:11:44.087937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:29.833 [2024-09-28 16:11:44.399072] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.212 16:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:31.212 00:10:31.212 real 0m7.910s 00:10:31.212 user 0m12.056s 00:10:31.212 sys 0m1.571s 00:10:31.212 16:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.212 16:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 ************************************ 00:10:31.212 END TEST raid_superblock_test 00:10:31.212 ************************************ 00:10:31.212 16:11:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:31.212 16:11:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:31.212 16:11:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.212 16:11:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 ************************************ 00:10:31.212 START TEST raid_read_error_test 00:10:31.212 ************************************ 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:31.212 16:11:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:31.212 16:11:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5FagFVSFE0 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69114 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69114 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69114 ']' 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.212 16:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.213 16:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.472 [2024-09-28 16:11:45.915243] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:31.472 [2024-09-28 16:11:45.915467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69114 ] 00:10:31.472 [2024-09-28 16:11:46.086198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.730 [2024-09-28 16:11:46.329742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.989 [2024-09-28 16:11:46.558440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.989 [2024-09-28 16:11:46.558486] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 BaseBdev1_malloc 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 true 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 [2024-09-28 16:11:46.806946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:32.249 [2024-09-28 16:11:46.807091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.249 [2024-09-28 16:11:46.807113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:32.249 [2024-09-28 16:11:46.807125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.249 [2024-09-28 16:11:46.809702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.249 [2024-09-28 16:11:46.809742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:32.249 BaseBdev1 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 BaseBdev2_malloc 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 true 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 [2024-09-28 16:11:46.901085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:32.249 [2024-09-28 16:11:46.901199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.249 [2024-09-28 16:11:46.901258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:32.249 [2024-09-28 16:11:46.901292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.249 [2024-09-28 16:11:46.903668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.249 [2024-09-28 16:11:46.903751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:32.249 BaseBdev2 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.508 BaseBdev3_malloc 00:10:32.508 16:11:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.508 true 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.508 [2024-09-28 16:11:46.973987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:32.508 [2024-09-28 16:11:46.974111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.508 [2024-09-28 16:11:46.974161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:32.508 [2024-09-28 16:11:46.974191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.508 [2024-09-28 16:11:46.976576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.508 [2024-09-28 16:11:46.976666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:32.508 BaseBdev3 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.508 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.508 [2024-09-28 16:11:46.986047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.508 [2024-09-28 16:11:46.988122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.508 [2024-09-28 16:11:46.988249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.508 [2024-09-28 16:11:46.988525] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.508 [2024-09-28 16:11:46.988574] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:32.508 [2024-09-28 16:11:46.988856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:32.508 [2024-09-28 16:11:46.989089] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.509 [2024-09-28 16:11:46.989138] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:32.509 [2024-09-28 16:11:46.989339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.509 16:11:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.509 16:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.509 16:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.509 16:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.509 "name": "raid_bdev1", 00:10:32.509 "uuid": "5383ff1b-a837-4f2c-8534-1a06ca60a5fc", 00:10:32.509 "strip_size_kb": 0, 00:10:32.509 "state": "online", 00:10:32.509 "raid_level": "raid1", 00:10:32.509 "superblock": true, 00:10:32.509 "num_base_bdevs": 3, 00:10:32.509 "num_base_bdevs_discovered": 3, 00:10:32.509 "num_base_bdevs_operational": 3, 00:10:32.509 "base_bdevs_list": [ 00:10:32.509 { 00:10:32.509 "name": "BaseBdev1", 00:10:32.509 "uuid": "03f6891d-8f07-592b-83ae-c8a1306a0c95", 00:10:32.509 "is_configured": true, 00:10:32.509 "data_offset": 2048, 00:10:32.509 "data_size": 63488 00:10:32.509 }, 00:10:32.509 { 00:10:32.509 "name": "BaseBdev2", 00:10:32.509 "uuid": "e948d633-091e-5602-bf95-61f356a1167e", 00:10:32.509 "is_configured": true, 00:10:32.509 "data_offset": 2048, 00:10:32.509 "data_size": 63488 
00:10:32.509 }, 00:10:32.509 { 00:10:32.509 "name": "BaseBdev3", 00:10:32.509 "uuid": "c0e9b23e-cd07-57b8-af85-a9e2839f5318", 00:10:32.509 "is_configured": true, 00:10:32.509 "data_offset": 2048, 00:10:32.509 "data_size": 63488 00:10:32.509 } 00:10:32.509 ] 00:10:32.509 }' 00:10:32.509 16:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.509 16:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.768 16:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:32.768 16:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:33.028 [2024-09-28 16:11:47.506395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.966 
16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.966 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.967 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.967 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.967 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.967 "name": "raid_bdev1", 00:10:33.967 "uuid": "5383ff1b-a837-4f2c-8534-1a06ca60a5fc", 00:10:33.967 "strip_size_kb": 0, 00:10:33.967 "state": "online", 00:10:33.967 "raid_level": "raid1", 00:10:33.967 "superblock": true, 00:10:33.967 "num_base_bdevs": 3, 00:10:33.967 "num_base_bdevs_discovered": 3, 00:10:33.967 "num_base_bdevs_operational": 3, 00:10:33.967 "base_bdevs_list": [ 00:10:33.967 { 00:10:33.967 "name": "BaseBdev1", 00:10:33.967 "uuid": "03f6891d-8f07-592b-83ae-c8a1306a0c95", 
00:10:33.967 "is_configured": true, 00:10:33.967 "data_offset": 2048, 00:10:33.967 "data_size": 63488 00:10:33.967 }, 00:10:33.967 { 00:10:33.967 "name": "BaseBdev2", 00:10:33.967 "uuid": "e948d633-091e-5602-bf95-61f356a1167e", 00:10:33.967 "is_configured": true, 00:10:33.967 "data_offset": 2048, 00:10:33.967 "data_size": 63488 00:10:33.967 }, 00:10:33.967 { 00:10:33.967 "name": "BaseBdev3", 00:10:33.967 "uuid": "c0e9b23e-cd07-57b8-af85-a9e2839f5318", 00:10:33.967 "is_configured": true, 00:10:33.967 "data_offset": 2048, 00:10:33.967 "data_size": 63488 00:10:33.967 } 00:10:33.967 ] 00:10:33.967 }' 00:10:33.967 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.967 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.536 [2024-09-28 16:11:48.916197] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.536 [2024-09-28 16:11:48.916328] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.536 [2024-09-28 16:11:48.919171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.536 [2024-09-28 16:11:48.919279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.536 [2024-09-28 16:11:48.919432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.536 [2024-09-28 16:11:48.919497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:34.536 { 00:10:34.536 "results": [ 00:10:34.536 { 00:10:34.536 "job": "raid_bdev1", 
00:10:34.536 "core_mask": "0x1", 00:10:34.536 "workload": "randrw", 00:10:34.536 "percentage": 50, 00:10:34.536 "status": "finished", 00:10:34.536 "queue_depth": 1, 00:10:34.536 "io_size": 131072, 00:10:34.536 "runtime": 1.410605, 00:10:34.536 "iops": 10779.06288436522, 00:10:34.536 "mibps": 1347.3828605456524, 00:10:34.536 "io_failed": 0, 00:10:34.536 "io_timeout": 0, 00:10:34.536 "avg_latency_us": 90.35895788129909, 00:10:34.536 "min_latency_us": 21.910917030567685, 00:10:34.536 "max_latency_us": 1509.6174672489083 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "core_count": 1 00:10:34.536 } 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69114 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69114 ']' 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69114 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69114 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.536 killing process with pid 69114 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69114' 00:10:34.536 16:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69114 00:10:34.536 [2024-09-28 16:11:48.958390] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.536 16:11:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69114 00:10:34.536 [2024-09-28 16:11:49.204523] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5FagFVSFE0 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:35.915 ************************************ 00:10:35.915 END TEST raid_read_error_test 00:10:35.915 ************************************ 00:10:35.915 00:10:35.915 real 0m4.788s 00:10:35.915 user 0m5.491s 00:10:35.915 sys 0m0.705s 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.915 16:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.182 16:11:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:36.182 16:11:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:36.182 16:11:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.182 16:11:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.182 ************************************ 00:10:36.182 START TEST raid_write_error_test 00:10:36.182 ************************************ 00:10:36.182 16:11:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:36.182 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xfqEXSwsTZ 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69260 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69260 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69260 ']' 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.183 16:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.183 [2024-09-28 16:11:50.776287] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:36.183 [2024-09-28 16:11:50.776505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69260 ] 00:10:36.452 [2024-09-28 16:11:50.943803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.711 [2024-09-28 16:11:51.196346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.971 [2024-09-28 16:11:51.425002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.971 [2024-09-28 16:11:51.425046] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.971 BaseBdev1_malloc 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.971 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 true 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 [2024-09-28 16:11:51.670721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:37.232 [2024-09-28 16:11:51.670855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.232 [2024-09-28 16:11:51.670923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:37.232 [2024-09-28 16:11:51.670961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.232 [2024-09-28 16:11:51.673346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.232 [2024-09-28 16:11:51.673434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:37.232 BaseBdev1 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.232 BaseBdev2_malloc 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 true 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 [2024-09-28 16:11:51.770315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:37.232 [2024-09-28 16:11:51.770421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.232 [2024-09-28 16:11:51.770470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:37.232 [2024-09-28 16:11:51.770500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.232 [2024-09-28 16:11:51.772869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.232 [2024-09-28 16:11:51.772960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:37.232 BaseBdev2 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.232 16:11:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 BaseBdev3_malloc 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 true 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 [2024-09-28 16:11:51.842850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:37.232 [2024-09-28 16:11:51.842984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.232 [2024-09-28 16:11:51.843005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:37.232 [2024-09-28 16:11:51.843017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.232 [2024-09-28 16:11:51.845461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.232 [2024-09-28 16:11:51.845535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:37.232 BaseBdev3 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 [2024-09-28 16:11:51.854914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.232 [2024-09-28 16:11:51.857009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.232 [2024-09-28 16:11:51.857124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.232 [2024-09-28 16:11:51.857385] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:37.232 [2024-09-28 16:11:51.857449] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:37.232 [2024-09-28 16:11:51.857704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:37.232 [2024-09-28 16:11:51.857916] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:37.232 [2024-09-28 16:11:51.857962] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:37.232 [2024-09-28 16:11:51.858152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.232 "name": "raid_bdev1", 00:10:37.232 "uuid": "e695e812-f643-4af7-83da-8d8215edb776", 00:10:37.232 "strip_size_kb": 0, 00:10:37.232 "state": "online", 00:10:37.232 "raid_level": "raid1", 00:10:37.232 "superblock": true, 00:10:37.232 "num_base_bdevs": 3, 00:10:37.232 "num_base_bdevs_discovered": 3, 00:10:37.232 "num_base_bdevs_operational": 3, 00:10:37.232 "base_bdevs_list": [ 00:10:37.232 { 00:10:37.232 "name": "BaseBdev1", 00:10:37.232 
"uuid": "748c26aa-f265-5bf2-8ad6-7e7cce6aefbf", 00:10:37.232 "is_configured": true, 00:10:37.232 "data_offset": 2048, 00:10:37.232 "data_size": 63488 00:10:37.232 }, 00:10:37.232 { 00:10:37.232 "name": "BaseBdev2", 00:10:37.232 "uuid": "36d9122a-88c6-58f2-a1d3-54d18bf9b58c", 00:10:37.232 "is_configured": true, 00:10:37.232 "data_offset": 2048, 00:10:37.232 "data_size": 63488 00:10:37.232 }, 00:10:37.232 { 00:10:37.232 "name": "BaseBdev3", 00:10:37.232 "uuid": "a399bb1e-a106-5ad4-be8e-ce2b32927f23", 00:10:37.232 "is_configured": true, 00:10:37.232 "data_offset": 2048, 00:10:37.232 "data_size": 63488 00:10:37.232 } 00:10:37.232 ] 00:10:37.232 }' 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.232 16:11:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.802 16:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:37.802 16:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:37.802 [2024-09-28 16:11:52.355602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.740 [2024-09-28 16:11:53.279343] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:38.740 [2024-09-28 16:11:53.279509] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.740 [2024-09-28 16:11:53.279790] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.740 "name": "raid_bdev1", 00:10:38.740 "uuid": "e695e812-f643-4af7-83da-8d8215edb776", 00:10:38.740 "strip_size_kb": 0, 00:10:38.740 "state": "online", 00:10:38.740 "raid_level": "raid1", 00:10:38.740 "superblock": true, 00:10:38.740 "num_base_bdevs": 3, 00:10:38.740 "num_base_bdevs_discovered": 2, 00:10:38.740 "num_base_bdevs_operational": 2, 00:10:38.740 "base_bdevs_list": [ 00:10:38.740 { 00:10:38.740 "name": null, 00:10:38.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.740 "is_configured": false, 00:10:38.740 "data_offset": 0, 00:10:38.740 "data_size": 63488 00:10:38.740 }, 00:10:38.740 { 00:10:38.740 "name": "BaseBdev2", 00:10:38.740 "uuid": "36d9122a-88c6-58f2-a1d3-54d18bf9b58c", 00:10:38.740 "is_configured": true, 00:10:38.740 "data_offset": 2048, 00:10:38.740 "data_size": 63488 00:10:38.740 }, 00:10:38.740 { 00:10:38.740 "name": "BaseBdev3", 00:10:38.740 "uuid": "a399bb1e-a106-5ad4-be8e-ce2b32927f23", 00:10:38.740 "is_configured": true, 00:10:38.740 "data_offset": 2048, 00:10:38.740 "data_size": 63488 00:10:38.740 } 00:10:38.740 ] 00:10:38.740 }' 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.740 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.308 [2024-09-28 16:11:53.730147] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.308 [2024-09-28 16:11:53.730310] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.308 [2024-09-28 16:11:53.732967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.308 [2024-09-28 16:11:53.733063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.308 [2024-09-28 16:11:53.733183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.308 [2024-09-28 16:11:53.733251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:39.308 { 00:10:39.308 "results": [ 00:10:39.308 { 00:10:39.308 "job": "raid_bdev1", 00:10:39.308 "core_mask": "0x1", 00:10:39.308 "workload": "randrw", 00:10:39.308 "percentage": 50, 00:10:39.308 "status": "finished", 00:10:39.308 "queue_depth": 1, 00:10:39.308 "io_size": 131072, 00:10:39.308 "runtime": 1.375192, 00:10:39.308 "iops": 12113.944816432906, 00:10:39.308 "mibps": 1514.2431020541133, 00:10:39.308 "io_failed": 0, 00:10:39.308 "io_timeout": 0, 00:10:39.308 "avg_latency_us": 80.10220526769825, 00:10:39.308 "min_latency_us": 22.805240174672488, 00:10:39.308 "max_latency_us": 1359.3711790393013 00:10:39.308 } 00:10:39.308 ], 00:10:39.308 "core_count": 1 00:10:39.308 } 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69260 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69260 ']' 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69260 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:39.308 16:11:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69260 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69260' 00:10:39.308 killing process with pid 69260 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69260 00:10:39.308 [2024-09-28 16:11:53.780326] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.308 16:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69260 00:10:39.567 [2024-09-28 16:11:54.023969] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xfqEXSwsTZ 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:40.947 ************************************ 00:10:40.947 END TEST raid_write_error_test 00:10:40.947 ************************************ 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:10:40.947 00:10:40.947 real 0m4.766s 00:10:40.947 user 0m5.449s 00:10:40.947 sys 0m0.692s 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.947 16:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.947 16:11:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:40.947 16:11:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:40.947 16:11:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:40.947 16:11:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:40.947 16:11:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.947 16:11:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.947 ************************************ 00:10:40.947 START TEST raid_state_function_test 00:10:40.947 ************************************ 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.947 
16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:40.947 16:11:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69405 00:10:40.947 Process raid pid: 69405 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69405' 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69405 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69405 ']' 00:10:40.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.947 16:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.947 [2024-09-28 16:11:55.613740] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:40.947 [2024-09-28 16:11:55.613892] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.207 [2024-09-28 16:11:55.787434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.466 [2024-09-28 16:11:56.032327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.726 [2024-09-28 16:11:56.265020] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.726 [2024-09-28 16:11:56.265058] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.986 [2024-09-28 16:11:56.425566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.986 [2024-09-28 16:11:56.425697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.986 [2024-09-28 16:11:56.425744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.986 [2024-09-28 16:11:56.425768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.986 [2024-09-28 16:11:56.425785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:41.986 [2024-09-28 16:11:56.425805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.986 [2024-09-28 16:11:56.425822] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.986 [2024-09-28 16:11:56.425845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.986 "name": "Existed_Raid", 00:10:41.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.986 "strip_size_kb": 64, 00:10:41.986 "state": "configuring", 00:10:41.986 "raid_level": "raid0", 00:10:41.986 "superblock": false, 00:10:41.986 "num_base_bdevs": 4, 00:10:41.986 "num_base_bdevs_discovered": 0, 00:10:41.986 "num_base_bdevs_operational": 4, 00:10:41.986 "base_bdevs_list": [ 00:10:41.986 { 00:10:41.986 "name": "BaseBdev1", 00:10:41.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.986 "is_configured": false, 00:10:41.986 "data_offset": 0, 00:10:41.986 "data_size": 0 00:10:41.986 }, 00:10:41.986 { 00:10:41.986 "name": "BaseBdev2", 00:10:41.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.986 "is_configured": false, 00:10:41.986 "data_offset": 0, 00:10:41.986 "data_size": 0 00:10:41.986 }, 00:10:41.986 { 00:10:41.986 "name": "BaseBdev3", 00:10:41.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.986 "is_configured": false, 00:10:41.986 "data_offset": 0, 00:10:41.986 "data_size": 0 00:10:41.986 }, 00:10:41.986 { 00:10:41.986 "name": "BaseBdev4", 00:10:41.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.986 "is_configured": false, 00:10:41.986 "data_offset": 0, 00:10:41.986 "data_size": 0 00:10:41.986 } 00:10:41.986 ] 00:10:41.986 }' 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.986 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.247 [2024-09-28 16:11:56.876724] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.247 [2024-09-28 16:11:56.876829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.247 [2024-09-28 16:11:56.888712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.247 [2024-09-28 16:11:56.888797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.247 [2024-09-28 16:11:56.888839] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.247 [2024-09-28 16:11:56.888862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.247 [2024-09-28 16:11:56.888880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.247 [2024-09-28 16:11:56.888900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.247 [2024-09-28 16:11:56.888917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.247 [2024-09-28 16:11:56.888937] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.247 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.507 [2024-09-28 16:11:56.953097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.507 BaseBdev1 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.507 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.507 [ 00:10:42.507 { 00:10:42.507 "name": "BaseBdev1", 00:10:42.507 "aliases": [ 00:10:42.507 "769656f6-5a95-4a14-89eb-ddba9d5862c5" 00:10:42.507 ], 00:10:42.507 "product_name": "Malloc disk", 00:10:42.507 "block_size": 512, 00:10:42.507 "num_blocks": 65536, 00:10:42.507 "uuid": "769656f6-5a95-4a14-89eb-ddba9d5862c5", 00:10:42.507 "assigned_rate_limits": { 00:10:42.507 "rw_ios_per_sec": 0, 00:10:42.507 "rw_mbytes_per_sec": 0, 00:10:42.507 "r_mbytes_per_sec": 0, 00:10:42.507 "w_mbytes_per_sec": 0 00:10:42.507 }, 00:10:42.507 "claimed": true, 00:10:42.507 "claim_type": "exclusive_write", 00:10:42.507 "zoned": false, 00:10:42.507 "supported_io_types": { 00:10:42.507 "read": true, 00:10:42.507 "write": true, 00:10:42.507 "unmap": true, 00:10:42.507 "flush": true, 00:10:42.507 "reset": true, 00:10:42.507 "nvme_admin": false, 00:10:42.507 "nvme_io": false, 00:10:42.507 "nvme_io_md": false, 00:10:42.507 "write_zeroes": true, 00:10:42.507 "zcopy": true, 00:10:42.507 "get_zone_info": false, 00:10:42.507 "zone_management": false, 00:10:42.507 "zone_append": false, 00:10:42.507 "compare": false, 00:10:42.507 "compare_and_write": false, 00:10:42.507 "abort": true, 00:10:42.507 "seek_hole": false, 00:10:42.507 "seek_data": false, 00:10:42.507 "copy": true, 00:10:42.507 "nvme_iov_md": false 00:10:42.507 }, 00:10:42.507 "memory_domains": [ 00:10:42.507 { 00:10:42.507 "dma_device_id": "system", 00:10:42.507 "dma_device_type": 1 00:10:42.507 }, 00:10:42.507 { 00:10:42.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.507 "dma_device_type": 2 00:10:42.507 } 00:10:42.507 ], 00:10:42.507 "driver_specific": {} 00:10:42.507 } 00:10:42.507 ] 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.508 16:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.508 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.508 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.508 "name": "Existed_Raid", 
00:10:42.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.508 "strip_size_kb": 64, 00:10:42.508 "state": "configuring", 00:10:42.508 "raid_level": "raid0", 00:10:42.508 "superblock": false, 00:10:42.508 "num_base_bdevs": 4, 00:10:42.508 "num_base_bdevs_discovered": 1, 00:10:42.508 "num_base_bdevs_operational": 4, 00:10:42.508 "base_bdevs_list": [ 00:10:42.508 { 00:10:42.508 "name": "BaseBdev1", 00:10:42.508 "uuid": "769656f6-5a95-4a14-89eb-ddba9d5862c5", 00:10:42.508 "is_configured": true, 00:10:42.508 "data_offset": 0, 00:10:42.508 "data_size": 65536 00:10:42.508 }, 00:10:42.508 { 00:10:42.508 "name": "BaseBdev2", 00:10:42.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.508 "is_configured": false, 00:10:42.508 "data_offset": 0, 00:10:42.508 "data_size": 0 00:10:42.508 }, 00:10:42.508 { 00:10:42.508 "name": "BaseBdev3", 00:10:42.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.508 "is_configured": false, 00:10:42.508 "data_offset": 0, 00:10:42.508 "data_size": 0 00:10:42.508 }, 00:10:42.508 { 00:10:42.508 "name": "BaseBdev4", 00:10:42.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.508 "is_configured": false, 00:10:42.508 "data_offset": 0, 00:10:42.508 "data_size": 0 00:10:42.508 } 00:10:42.508 ] 00:10:42.508 }' 00:10:42.508 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.508 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.078 [2024-09-28 16:11:57.500186] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.078 [2024-09-28 16:11:57.500252] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.078 [2024-09-28 16:11:57.512212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.078 [2024-09-28 16:11:57.514411] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.078 [2024-09-28 16:11:57.514484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.078 [2024-09-28 16:11:57.514528] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.078 [2024-09-28 16:11:57.514552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.078 [2024-09-28 16:11:57.514570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.078 [2024-09-28 16:11:57.514590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.078 "name": "Existed_Raid", 00:10:43.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.078 "strip_size_kb": 64, 00:10:43.078 "state": "configuring", 00:10:43.078 "raid_level": "raid0", 00:10:43.078 "superblock": false, 00:10:43.078 "num_base_bdevs": 4, 00:10:43.078 
"num_base_bdevs_discovered": 1, 00:10:43.078 "num_base_bdevs_operational": 4, 00:10:43.078 "base_bdevs_list": [ 00:10:43.078 { 00:10:43.078 "name": "BaseBdev1", 00:10:43.078 "uuid": "769656f6-5a95-4a14-89eb-ddba9d5862c5", 00:10:43.078 "is_configured": true, 00:10:43.078 "data_offset": 0, 00:10:43.078 "data_size": 65536 00:10:43.078 }, 00:10:43.078 { 00:10:43.078 "name": "BaseBdev2", 00:10:43.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.078 "is_configured": false, 00:10:43.078 "data_offset": 0, 00:10:43.078 "data_size": 0 00:10:43.078 }, 00:10:43.078 { 00:10:43.078 "name": "BaseBdev3", 00:10:43.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.078 "is_configured": false, 00:10:43.078 "data_offset": 0, 00:10:43.078 "data_size": 0 00:10:43.078 }, 00:10:43.078 { 00:10:43.078 "name": "BaseBdev4", 00:10:43.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.078 "is_configured": false, 00:10:43.078 "data_offset": 0, 00:10:43.078 "data_size": 0 00:10:43.078 } 00:10:43.078 ] 00:10:43.078 }' 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.078 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.338 16:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.338 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.338 16:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.338 [2024-09-28 16:11:58.006913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.338 BaseBdev2 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:43.338 16:11:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.338 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.598 [ 00:10:43.598 { 00:10:43.598 "name": "BaseBdev2", 00:10:43.598 "aliases": [ 00:10:43.598 "55f0cb47-f330-44d7-b3e4-f722d243348a" 00:10:43.598 ], 00:10:43.598 "product_name": "Malloc disk", 00:10:43.598 "block_size": 512, 00:10:43.598 "num_blocks": 65536, 00:10:43.598 "uuid": "55f0cb47-f330-44d7-b3e4-f722d243348a", 00:10:43.598 "assigned_rate_limits": { 00:10:43.598 "rw_ios_per_sec": 0, 00:10:43.598 "rw_mbytes_per_sec": 0, 00:10:43.598 "r_mbytes_per_sec": 0, 00:10:43.598 "w_mbytes_per_sec": 0 00:10:43.598 }, 00:10:43.598 "claimed": true, 00:10:43.598 "claim_type": "exclusive_write", 00:10:43.598 "zoned": false, 00:10:43.598 "supported_io_types": { 
00:10:43.598 "read": true, 00:10:43.598 "write": true, 00:10:43.598 "unmap": true, 00:10:43.598 "flush": true, 00:10:43.598 "reset": true, 00:10:43.598 "nvme_admin": false, 00:10:43.598 "nvme_io": false, 00:10:43.598 "nvme_io_md": false, 00:10:43.598 "write_zeroes": true, 00:10:43.598 "zcopy": true, 00:10:43.598 "get_zone_info": false, 00:10:43.598 "zone_management": false, 00:10:43.598 "zone_append": false, 00:10:43.598 "compare": false, 00:10:43.598 "compare_and_write": false, 00:10:43.598 "abort": true, 00:10:43.598 "seek_hole": false, 00:10:43.598 "seek_data": false, 00:10:43.598 "copy": true, 00:10:43.598 "nvme_iov_md": false 00:10:43.598 }, 00:10:43.598 "memory_domains": [ 00:10:43.598 { 00:10:43.598 "dma_device_id": "system", 00:10:43.598 "dma_device_type": 1 00:10:43.598 }, 00:10:43.598 { 00:10:43.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.598 "dma_device_type": 2 00:10:43.598 } 00:10:43.598 ], 00:10:43.598 "driver_specific": {} 00:10:43.598 } 00:10:43.598 ] 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.598 "name": "Existed_Raid", 00:10:43.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.598 "strip_size_kb": 64, 00:10:43.598 "state": "configuring", 00:10:43.598 "raid_level": "raid0", 00:10:43.598 "superblock": false, 00:10:43.598 "num_base_bdevs": 4, 00:10:43.598 "num_base_bdevs_discovered": 2, 00:10:43.598 "num_base_bdevs_operational": 4, 00:10:43.598 "base_bdevs_list": [ 00:10:43.598 { 00:10:43.598 "name": "BaseBdev1", 00:10:43.598 "uuid": "769656f6-5a95-4a14-89eb-ddba9d5862c5", 00:10:43.598 "is_configured": true, 00:10:43.598 "data_offset": 0, 00:10:43.598 "data_size": 65536 00:10:43.598 }, 00:10:43.598 { 00:10:43.598 "name": "BaseBdev2", 00:10:43.598 "uuid": "55f0cb47-f330-44d7-b3e4-f722d243348a", 00:10:43.598 
"is_configured": true, 00:10:43.598 "data_offset": 0, 00:10:43.598 "data_size": 65536 00:10:43.598 }, 00:10:43.598 { 00:10:43.598 "name": "BaseBdev3", 00:10:43.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.598 "is_configured": false, 00:10:43.598 "data_offset": 0, 00:10:43.598 "data_size": 0 00:10:43.598 }, 00:10:43.598 { 00:10:43.598 "name": "BaseBdev4", 00:10:43.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.598 "is_configured": false, 00:10:43.598 "data_offset": 0, 00:10:43.598 "data_size": 0 00:10:43.598 } 00:10:43.598 ] 00:10:43.598 }' 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.598 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.857 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:43.857 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.857 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.129 [2024-09-28 16:11:58.556550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.129 BaseBdev3 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.129 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.129 [ 00:10:44.129 { 00:10:44.129 "name": "BaseBdev3", 00:10:44.129 "aliases": [ 00:10:44.129 "03eb80c2-109f-4d41-ab0c-e6c308f382a3" 00:10:44.129 ], 00:10:44.129 "product_name": "Malloc disk", 00:10:44.129 "block_size": 512, 00:10:44.129 "num_blocks": 65536, 00:10:44.129 "uuid": "03eb80c2-109f-4d41-ab0c-e6c308f382a3", 00:10:44.129 "assigned_rate_limits": { 00:10:44.129 "rw_ios_per_sec": 0, 00:10:44.129 "rw_mbytes_per_sec": 0, 00:10:44.129 "r_mbytes_per_sec": 0, 00:10:44.129 "w_mbytes_per_sec": 0 00:10:44.129 }, 00:10:44.129 "claimed": true, 00:10:44.129 "claim_type": "exclusive_write", 00:10:44.129 "zoned": false, 00:10:44.129 "supported_io_types": { 00:10:44.129 "read": true, 00:10:44.129 "write": true, 00:10:44.129 "unmap": true, 00:10:44.129 "flush": true, 00:10:44.129 "reset": true, 00:10:44.129 "nvme_admin": false, 00:10:44.129 "nvme_io": false, 00:10:44.129 "nvme_io_md": false, 00:10:44.129 "write_zeroes": true, 00:10:44.129 "zcopy": true, 00:10:44.129 "get_zone_info": false, 00:10:44.129 "zone_management": false, 00:10:44.129 "zone_append": false, 00:10:44.129 "compare": false, 00:10:44.129 "compare_and_write": false, 
00:10:44.129 "abort": true, 00:10:44.129 "seek_hole": false, 00:10:44.129 "seek_data": false, 00:10:44.129 "copy": true, 00:10:44.129 "nvme_iov_md": false 00:10:44.129 }, 00:10:44.129 "memory_domains": [ 00:10:44.129 { 00:10:44.129 "dma_device_id": "system", 00:10:44.129 "dma_device_type": 1 00:10:44.129 }, 00:10:44.129 { 00:10:44.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.129 "dma_device_type": 2 00:10:44.130 } 00:10:44.130 ], 00:10:44.130 "driver_specific": {} 00:10:44.130 } 00:10:44.130 ] 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.130 "name": "Existed_Raid", 00:10:44.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.130 "strip_size_kb": 64, 00:10:44.130 "state": "configuring", 00:10:44.130 "raid_level": "raid0", 00:10:44.130 "superblock": false, 00:10:44.130 "num_base_bdevs": 4, 00:10:44.130 "num_base_bdevs_discovered": 3, 00:10:44.130 "num_base_bdevs_operational": 4, 00:10:44.130 "base_bdevs_list": [ 00:10:44.130 { 00:10:44.130 "name": "BaseBdev1", 00:10:44.130 "uuid": "769656f6-5a95-4a14-89eb-ddba9d5862c5", 00:10:44.130 "is_configured": true, 00:10:44.130 "data_offset": 0, 00:10:44.130 "data_size": 65536 00:10:44.130 }, 00:10:44.130 { 00:10:44.130 "name": "BaseBdev2", 00:10:44.130 "uuid": "55f0cb47-f330-44d7-b3e4-f722d243348a", 00:10:44.130 "is_configured": true, 00:10:44.130 "data_offset": 0, 00:10:44.130 "data_size": 65536 00:10:44.130 }, 00:10:44.130 { 00:10:44.130 "name": "BaseBdev3", 00:10:44.130 "uuid": "03eb80c2-109f-4d41-ab0c-e6c308f382a3", 00:10:44.130 "is_configured": true, 00:10:44.130 "data_offset": 0, 00:10:44.130 "data_size": 65536 00:10:44.130 }, 00:10:44.130 { 00:10:44.130 "name": "BaseBdev4", 00:10:44.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.130 "is_configured": false, 
00:10:44.130 "data_offset": 0, 00:10:44.130 "data_size": 0 00:10:44.130 } 00:10:44.130 ] 00:10:44.130 }' 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.130 16:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.699 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.699 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.700 [2024-09-28 16:11:59.122579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.700 [2024-09-28 16:11:59.122702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.700 [2024-09-28 16:11:59.122730] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:44.700 [2024-09-28 16:11:59.123097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:44.700 [2024-09-28 16:11:59.123354] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.700 [2024-09-28 16:11:59.123406] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:44.700 [2024-09-28 16:11:59.123733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.700 BaseBdev4 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.700 [ 00:10:44.700 { 00:10:44.700 "name": "BaseBdev4", 00:10:44.700 "aliases": [ 00:10:44.700 "2f2ed079-2485-42a0-b21b-b12662ffdef8" 00:10:44.700 ], 00:10:44.700 "product_name": "Malloc disk", 00:10:44.700 "block_size": 512, 00:10:44.700 "num_blocks": 65536, 00:10:44.700 "uuid": "2f2ed079-2485-42a0-b21b-b12662ffdef8", 00:10:44.700 "assigned_rate_limits": { 00:10:44.700 "rw_ios_per_sec": 0, 00:10:44.700 "rw_mbytes_per_sec": 0, 00:10:44.700 "r_mbytes_per_sec": 0, 00:10:44.700 "w_mbytes_per_sec": 0 00:10:44.700 }, 00:10:44.700 "claimed": true, 00:10:44.700 "claim_type": "exclusive_write", 00:10:44.700 "zoned": false, 00:10:44.700 "supported_io_types": { 00:10:44.700 "read": true, 00:10:44.700 "write": true, 00:10:44.700 "unmap": true, 00:10:44.700 "flush": true, 00:10:44.700 "reset": true, 00:10:44.700 
"nvme_admin": false, 00:10:44.700 "nvme_io": false, 00:10:44.700 "nvme_io_md": false, 00:10:44.700 "write_zeroes": true, 00:10:44.700 "zcopy": true, 00:10:44.700 "get_zone_info": false, 00:10:44.700 "zone_management": false, 00:10:44.700 "zone_append": false, 00:10:44.700 "compare": false, 00:10:44.700 "compare_and_write": false, 00:10:44.700 "abort": true, 00:10:44.700 "seek_hole": false, 00:10:44.700 "seek_data": false, 00:10:44.700 "copy": true, 00:10:44.700 "nvme_iov_md": false 00:10:44.700 }, 00:10:44.700 "memory_domains": [ 00:10:44.700 { 00:10:44.700 "dma_device_id": "system", 00:10:44.700 "dma_device_type": 1 00:10:44.700 }, 00:10:44.700 { 00:10:44.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.700 "dma_device_type": 2 00:10:44.700 } 00:10:44.700 ], 00:10:44.700 "driver_specific": {} 00:10:44.700 } 00:10:44.700 ] 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.700 16:11:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.700 "name": "Existed_Raid", 00:10:44.700 "uuid": "6b5aeb13-39e7-4204-997a-d410384bbc2b", 00:10:44.700 "strip_size_kb": 64, 00:10:44.700 "state": "online", 00:10:44.700 "raid_level": "raid0", 00:10:44.700 "superblock": false, 00:10:44.700 "num_base_bdevs": 4, 00:10:44.700 "num_base_bdevs_discovered": 4, 00:10:44.700 "num_base_bdevs_operational": 4, 00:10:44.700 "base_bdevs_list": [ 00:10:44.700 { 00:10:44.700 "name": "BaseBdev1", 00:10:44.700 "uuid": "769656f6-5a95-4a14-89eb-ddba9d5862c5", 00:10:44.700 "is_configured": true, 00:10:44.700 "data_offset": 0, 00:10:44.700 "data_size": 65536 00:10:44.700 }, 00:10:44.700 { 00:10:44.700 "name": "BaseBdev2", 00:10:44.700 "uuid": "55f0cb47-f330-44d7-b3e4-f722d243348a", 00:10:44.700 "is_configured": true, 00:10:44.700 "data_offset": 0, 00:10:44.700 "data_size": 65536 00:10:44.700 }, 00:10:44.700 { 00:10:44.700 "name": "BaseBdev3", 00:10:44.700 "uuid": 
"03eb80c2-109f-4d41-ab0c-e6c308f382a3", 00:10:44.700 "is_configured": true, 00:10:44.700 "data_offset": 0, 00:10:44.700 "data_size": 65536 00:10:44.700 }, 00:10:44.700 { 00:10:44.700 "name": "BaseBdev4", 00:10:44.700 "uuid": "2f2ed079-2485-42a0-b21b-b12662ffdef8", 00:10:44.700 "is_configured": true, 00:10:44.700 "data_offset": 0, 00:10:44.700 "data_size": 65536 00:10:44.700 } 00:10:44.700 ] 00:10:44.700 }' 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.700 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.960 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.960 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.960 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.960 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.960 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.960 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.961 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.961 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.961 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.961 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.961 [2024-09-28 16:11:59.634006] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.220 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.221 16:11:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.221 "name": "Existed_Raid", 00:10:45.221 "aliases": [ 00:10:45.221 "6b5aeb13-39e7-4204-997a-d410384bbc2b" 00:10:45.221 ], 00:10:45.221 "product_name": "Raid Volume", 00:10:45.221 "block_size": 512, 00:10:45.221 "num_blocks": 262144, 00:10:45.221 "uuid": "6b5aeb13-39e7-4204-997a-d410384bbc2b", 00:10:45.221 "assigned_rate_limits": { 00:10:45.221 "rw_ios_per_sec": 0, 00:10:45.221 "rw_mbytes_per_sec": 0, 00:10:45.221 "r_mbytes_per_sec": 0, 00:10:45.221 "w_mbytes_per_sec": 0 00:10:45.221 }, 00:10:45.221 "claimed": false, 00:10:45.221 "zoned": false, 00:10:45.221 "supported_io_types": { 00:10:45.221 "read": true, 00:10:45.221 "write": true, 00:10:45.221 "unmap": true, 00:10:45.221 "flush": true, 00:10:45.221 "reset": true, 00:10:45.221 "nvme_admin": false, 00:10:45.221 "nvme_io": false, 00:10:45.221 "nvme_io_md": false, 00:10:45.221 "write_zeroes": true, 00:10:45.221 "zcopy": false, 00:10:45.221 "get_zone_info": false, 00:10:45.221 "zone_management": false, 00:10:45.221 "zone_append": false, 00:10:45.221 "compare": false, 00:10:45.221 "compare_and_write": false, 00:10:45.221 "abort": false, 00:10:45.221 "seek_hole": false, 00:10:45.221 "seek_data": false, 00:10:45.221 "copy": false, 00:10:45.221 "nvme_iov_md": false 00:10:45.221 }, 00:10:45.221 "memory_domains": [ 00:10:45.221 { 00:10:45.221 "dma_device_id": "system", 00:10:45.221 "dma_device_type": 1 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.221 "dma_device_type": 2 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "dma_device_id": "system", 00:10:45.221 "dma_device_type": 1 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.221 "dma_device_type": 2 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "dma_device_id": "system", 00:10:45.221 "dma_device_type": 1 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:45.221 "dma_device_type": 2 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "dma_device_id": "system", 00:10:45.221 "dma_device_type": 1 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.221 "dma_device_type": 2 00:10:45.221 } 00:10:45.221 ], 00:10:45.221 "driver_specific": { 00:10:45.221 "raid": { 00:10:45.221 "uuid": "6b5aeb13-39e7-4204-997a-d410384bbc2b", 00:10:45.221 "strip_size_kb": 64, 00:10:45.221 "state": "online", 00:10:45.221 "raid_level": "raid0", 00:10:45.221 "superblock": false, 00:10:45.221 "num_base_bdevs": 4, 00:10:45.221 "num_base_bdevs_discovered": 4, 00:10:45.221 "num_base_bdevs_operational": 4, 00:10:45.221 "base_bdevs_list": [ 00:10:45.221 { 00:10:45.221 "name": "BaseBdev1", 00:10:45.221 "uuid": "769656f6-5a95-4a14-89eb-ddba9d5862c5", 00:10:45.221 "is_configured": true, 00:10:45.221 "data_offset": 0, 00:10:45.221 "data_size": 65536 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "name": "BaseBdev2", 00:10:45.221 "uuid": "55f0cb47-f330-44d7-b3e4-f722d243348a", 00:10:45.221 "is_configured": true, 00:10:45.221 "data_offset": 0, 00:10:45.221 "data_size": 65536 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "name": "BaseBdev3", 00:10:45.221 "uuid": "03eb80c2-109f-4d41-ab0c-e6c308f382a3", 00:10:45.221 "is_configured": true, 00:10:45.221 "data_offset": 0, 00:10:45.221 "data_size": 65536 00:10:45.221 }, 00:10:45.221 { 00:10:45.221 "name": "BaseBdev4", 00:10:45.221 "uuid": "2f2ed079-2485-42a0-b21b-b12662ffdef8", 00:10:45.221 "is_configured": true, 00:10:45.221 "data_offset": 0, 00:10:45.221 "data_size": 65536 00:10:45.221 } 00:10:45.221 ] 00:10:45.221 } 00:10:45.221 } 00:10:45.221 }' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.221 BaseBdev2 00:10:45.221 BaseBdev3 
00:10:45.221 BaseBdev4' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.221 16:11:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.221 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.481 16:11:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.481 16:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.481 [2024-09-28 16:11:59.965217] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.481 [2024-09-28 16:11:59.965306] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.481 [2024-09-28 16:11:59.965380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.481 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.481 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.481 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:45.481 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.482 "name": "Existed_Raid", 00:10:45.482 "uuid": "6b5aeb13-39e7-4204-997a-d410384bbc2b", 00:10:45.482 "strip_size_kb": 64, 00:10:45.482 "state": "offline", 00:10:45.482 "raid_level": "raid0", 00:10:45.482 "superblock": false, 00:10:45.482 "num_base_bdevs": 4, 00:10:45.482 "num_base_bdevs_discovered": 3, 00:10:45.482 "num_base_bdevs_operational": 3, 00:10:45.482 "base_bdevs_list": [ 00:10:45.482 { 00:10:45.482 "name": null, 00:10:45.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.482 "is_configured": false, 00:10:45.482 "data_offset": 0, 00:10:45.482 "data_size": 65536 00:10:45.482 }, 00:10:45.482 { 00:10:45.482 "name": "BaseBdev2", 00:10:45.482 "uuid": "55f0cb47-f330-44d7-b3e4-f722d243348a", 00:10:45.482 "is_configured": 
true, 00:10:45.482 "data_offset": 0, 00:10:45.482 "data_size": 65536 00:10:45.482 }, 00:10:45.482 { 00:10:45.482 "name": "BaseBdev3", 00:10:45.482 "uuid": "03eb80c2-109f-4d41-ab0c-e6c308f382a3", 00:10:45.482 "is_configured": true, 00:10:45.482 "data_offset": 0, 00:10:45.482 "data_size": 65536 00:10:45.482 }, 00:10:45.482 { 00:10:45.482 "name": "BaseBdev4", 00:10:45.482 "uuid": "2f2ed079-2485-42a0-b21b-b12662ffdef8", 00:10:45.482 "is_configured": true, 00:10:45.482 "data_offset": 0, 00:10:45.482 "data_size": 65536 00:10:45.482 } 00:10:45.482 ] 00:10:45.482 }' 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.482 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.051 [2024-09-28 16:12:00.577538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.051 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.051 [2024-09-28 16:12:00.730795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.310 16:12:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.310 [2024-09-28 16:12:00.883799] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:46.310 [2024-09-28 16:12:00.883934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.310 16:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.570 16:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.570 BaseBdev2 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.570 [ 00:10:46.570 { 00:10:46.570 "name": "BaseBdev2", 00:10:46.570 "aliases": [ 00:10:46.570 "f4c89589-1837-4051-a8d1-0fdbf224b42c" 00:10:46.570 ], 00:10:46.570 "product_name": "Malloc disk", 00:10:46.570 "block_size": 512, 00:10:46.570 "num_blocks": 65536, 00:10:46.570 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:46.570 "assigned_rate_limits": { 00:10:46.570 "rw_ios_per_sec": 0, 00:10:46.570 "rw_mbytes_per_sec": 0, 00:10:46.570 "r_mbytes_per_sec": 0, 00:10:46.570 "w_mbytes_per_sec": 0 00:10:46.570 }, 00:10:46.570 "claimed": false, 00:10:46.570 "zoned": false, 00:10:46.570 "supported_io_types": { 00:10:46.570 "read": true, 00:10:46.570 "write": true, 00:10:46.570 "unmap": true, 00:10:46.570 "flush": true, 00:10:46.570 "reset": true, 00:10:46.570 "nvme_admin": false, 00:10:46.570 "nvme_io": false, 00:10:46.570 "nvme_io_md": false, 00:10:46.570 "write_zeroes": true, 00:10:46.570 "zcopy": true, 00:10:46.570 "get_zone_info": false, 00:10:46.570 "zone_management": false, 00:10:46.570 "zone_append": false, 00:10:46.570 "compare": false, 00:10:46.570 "compare_and_write": false, 00:10:46.570 "abort": true, 00:10:46.570 "seek_hole": false, 00:10:46.570 
"seek_data": false, 00:10:46.570 "copy": true, 00:10:46.570 "nvme_iov_md": false 00:10:46.570 }, 00:10:46.570 "memory_domains": [ 00:10:46.570 { 00:10:46.570 "dma_device_id": "system", 00:10:46.570 "dma_device_type": 1 00:10:46.570 }, 00:10:46.570 { 00:10:46.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.570 "dma_device_type": 2 00:10:46.570 } 00:10:46.570 ], 00:10:46.570 "driver_specific": {} 00:10:46.570 } 00:10:46.570 ] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.570 BaseBdev3 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.570 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.570 [ 00:10:46.570 { 00:10:46.570 "name": "BaseBdev3", 00:10:46.570 "aliases": [ 00:10:46.570 "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51" 00:10:46.570 ], 00:10:46.570 "product_name": "Malloc disk", 00:10:46.570 "block_size": 512, 00:10:46.570 "num_blocks": 65536, 00:10:46.570 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:46.570 "assigned_rate_limits": { 00:10:46.570 "rw_ios_per_sec": 0, 00:10:46.570 "rw_mbytes_per_sec": 0, 00:10:46.570 "r_mbytes_per_sec": 0, 00:10:46.570 "w_mbytes_per_sec": 0 00:10:46.570 }, 00:10:46.570 "claimed": false, 00:10:46.570 "zoned": false, 00:10:46.570 "supported_io_types": { 00:10:46.570 "read": true, 00:10:46.570 "write": true, 00:10:46.570 "unmap": true, 00:10:46.570 "flush": true, 00:10:46.570 "reset": true, 00:10:46.570 "nvme_admin": false, 00:10:46.570 "nvme_io": false, 00:10:46.570 "nvme_io_md": false, 00:10:46.570 "write_zeroes": true, 00:10:46.570 "zcopy": true, 00:10:46.570 "get_zone_info": false, 00:10:46.570 "zone_management": false, 00:10:46.570 "zone_append": false, 00:10:46.570 "compare": false, 00:10:46.570 "compare_and_write": false, 00:10:46.571 "abort": true, 00:10:46.571 "seek_hole": false, 00:10:46.571 "seek_data": false, 
00:10:46.571 "copy": true, 00:10:46.571 "nvme_iov_md": false 00:10:46.571 }, 00:10:46.571 "memory_domains": [ 00:10:46.571 { 00:10:46.571 "dma_device_id": "system", 00:10:46.571 "dma_device_type": 1 00:10:46.571 }, 00:10:46.571 { 00:10:46.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.571 "dma_device_type": 2 00:10:46.571 } 00:10:46.571 ], 00:10:46.571 "driver_specific": {} 00:10:46.571 } 00:10:46.571 ] 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.571 BaseBdev4 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.571 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.571 
16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.829 [ 00:10:46.829 { 00:10:46.829 "name": "BaseBdev4", 00:10:46.829 "aliases": [ 00:10:46.829 "7f5c848b-0bfc-4f20-8cd2-c04e103533d4" 00:10:46.829 ], 00:10:46.829 "product_name": "Malloc disk", 00:10:46.829 "block_size": 512, 00:10:46.829 "num_blocks": 65536, 00:10:46.829 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:46.829 "assigned_rate_limits": { 00:10:46.829 "rw_ios_per_sec": 0, 00:10:46.829 "rw_mbytes_per_sec": 0, 00:10:46.829 "r_mbytes_per_sec": 0, 00:10:46.829 "w_mbytes_per_sec": 0 00:10:46.829 }, 00:10:46.829 "claimed": false, 00:10:46.829 "zoned": false, 00:10:46.829 "supported_io_types": { 00:10:46.829 "read": true, 00:10:46.829 "write": true, 00:10:46.829 "unmap": true, 00:10:46.829 "flush": true, 00:10:46.829 "reset": true, 00:10:46.829 "nvme_admin": false, 00:10:46.829 "nvme_io": false, 00:10:46.829 "nvme_io_md": false, 00:10:46.829 "write_zeroes": true, 00:10:46.829 "zcopy": true, 00:10:46.829 "get_zone_info": false, 00:10:46.829 "zone_management": false, 00:10:46.829 "zone_append": false, 00:10:46.829 "compare": false, 00:10:46.829 "compare_and_write": false, 00:10:46.829 "abort": true, 00:10:46.829 "seek_hole": false, 00:10:46.829 "seek_data": false, 00:10:46.829 
"copy": true, 00:10:46.829 "nvme_iov_md": false 00:10:46.829 }, 00:10:46.829 "memory_domains": [ 00:10:46.829 { 00:10:46.829 "dma_device_id": "system", 00:10:46.829 "dma_device_type": 1 00:10:46.829 }, 00:10:46.829 { 00:10:46.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.829 "dma_device_type": 2 00:10:46.829 } 00:10:46.829 ], 00:10:46.829 "driver_specific": {} 00:10:46.829 } 00:10:46.829 ] 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.829 [2024-09-28 16:12:01.297596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.829 [2024-09-28 16:12:01.297717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.829 [2024-09-28 16:12:01.297776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.829 [2024-09-28 16:12:01.299874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.829 [2024-09-28 16:12:01.299967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.829 16:12:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.829 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.829 "name": "Existed_Raid", 00:10:46.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.829 "strip_size_kb": 64, 00:10:46.829 "state": "configuring", 00:10:46.829 
"raid_level": "raid0", 00:10:46.829 "superblock": false, 00:10:46.829 "num_base_bdevs": 4, 00:10:46.829 "num_base_bdevs_discovered": 3, 00:10:46.829 "num_base_bdevs_operational": 4, 00:10:46.829 "base_bdevs_list": [ 00:10:46.829 { 00:10:46.829 "name": "BaseBdev1", 00:10:46.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.829 "is_configured": false, 00:10:46.829 "data_offset": 0, 00:10:46.829 "data_size": 0 00:10:46.829 }, 00:10:46.830 { 00:10:46.830 "name": "BaseBdev2", 00:10:46.830 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:46.830 "is_configured": true, 00:10:46.830 "data_offset": 0, 00:10:46.830 "data_size": 65536 00:10:46.830 }, 00:10:46.830 { 00:10:46.830 "name": "BaseBdev3", 00:10:46.830 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:46.830 "is_configured": true, 00:10:46.830 "data_offset": 0, 00:10:46.830 "data_size": 65536 00:10:46.830 }, 00:10:46.830 { 00:10:46.830 "name": "BaseBdev4", 00:10:46.830 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:46.830 "is_configured": true, 00:10:46.830 "data_offset": 0, 00:10:46.830 "data_size": 65536 00:10:46.830 } 00:10:46.830 ] 00:10:46.830 }' 00:10:46.830 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.830 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.088 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:47.088 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.088 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.088 [2024-09-28 16:12:01.768782] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.347 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.348 "name": "Existed_Raid", 00:10:47.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.348 "strip_size_kb": 64, 00:10:47.348 "state": "configuring", 00:10:47.348 "raid_level": "raid0", 00:10:47.348 "superblock": false, 00:10:47.348 
"num_base_bdevs": 4, 00:10:47.348 "num_base_bdevs_discovered": 2, 00:10:47.348 "num_base_bdevs_operational": 4, 00:10:47.348 "base_bdevs_list": [ 00:10:47.348 { 00:10:47.348 "name": "BaseBdev1", 00:10:47.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.348 "is_configured": false, 00:10:47.348 "data_offset": 0, 00:10:47.348 "data_size": 0 00:10:47.348 }, 00:10:47.348 { 00:10:47.348 "name": null, 00:10:47.348 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:47.348 "is_configured": false, 00:10:47.348 "data_offset": 0, 00:10:47.348 "data_size": 65536 00:10:47.348 }, 00:10:47.348 { 00:10:47.348 "name": "BaseBdev3", 00:10:47.348 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:47.348 "is_configured": true, 00:10:47.348 "data_offset": 0, 00:10:47.348 "data_size": 65536 00:10:47.348 }, 00:10:47.348 { 00:10:47.348 "name": "BaseBdev4", 00:10:47.348 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:47.348 "is_configured": true, 00:10:47.348 "data_offset": 0, 00:10:47.348 "data_size": 65536 00:10:47.348 } 00:10:47.348 ] 00:10:47.348 }' 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.348 16:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:47.607 16:12:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.607 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.867 [2024-09-28 16:12:02.300630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.867 BaseBdev1 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.867 [ 00:10:47.867 { 00:10:47.867 "name": "BaseBdev1", 00:10:47.867 "aliases": [ 00:10:47.867 "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d" 00:10:47.867 ], 00:10:47.867 "product_name": "Malloc disk", 00:10:47.867 "block_size": 512, 00:10:47.867 "num_blocks": 65536, 00:10:47.867 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:47.867 "assigned_rate_limits": { 00:10:47.867 "rw_ios_per_sec": 0, 00:10:47.867 "rw_mbytes_per_sec": 0, 00:10:47.867 "r_mbytes_per_sec": 0, 00:10:47.867 "w_mbytes_per_sec": 0 00:10:47.867 }, 00:10:47.867 "claimed": true, 00:10:47.867 "claim_type": "exclusive_write", 00:10:47.867 "zoned": false, 00:10:47.867 "supported_io_types": { 00:10:47.867 "read": true, 00:10:47.867 "write": true, 00:10:47.867 "unmap": true, 00:10:47.867 "flush": true, 00:10:47.867 "reset": true, 00:10:47.867 "nvme_admin": false, 00:10:47.867 "nvme_io": false, 00:10:47.867 "nvme_io_md": false, 00:10:47.867 "write_zeroes": true, 00:10:47.867 "zcopy": true, 00:10:47.867 "get_zone_info": false, 00:10:47.867 "zone_management": false, 00:10:47.867 "zone_append": false, 00:10:47.867 "compare": false, 00:10:47.867 "compare_and_write": false, 00:10:47.867 "abort": true, 00:10:47.867 "seek_hole": false, 00:10:47.867 "seek_data": false, 00:10:47.867 "copy": true, 00:10:47.867 "nvme_iov_md": false 00:10:47.867 }, 00:10:47.867 "memory_domains": [ 00:10:47.867 { 00:10:47.867 "dma_device_id": "system", 00:10:47.867 "dma_device_type": 1 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.867 "dma_device_type": 2 00:10:47.867 } 00:10:47.867 ], 00:10:47.867 "driver_specific": {} 00:10:47.867 } 00:10:47.867 ] 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.867 "name": "Existed_Raid", 00:10:47.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.867 "strip_size_kb": 64, 00:10:47.867 "state": "configuring", 00:10:47.867 "raid_level": "raid0", 00:10:47.867 "superblock": false, 
00:10:47.867 "num_base_bdevs": 4, 00:10:47.867 "num_base_bdevs_discovered": 3, 00:10:47.867 "num_base_bdevs_operational": 4, 00:10:47.867 "base_bdevs_list": [ 00:10:47.867 { 00:10:47.867 "name": "BaseBdev1", 00:10:47.867 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:47.867 "is_configured": true, 00:10:47.867 "data_offset": 0, 00:10:47.867 "data_size": 65536 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "name": null, 00:10:47.867 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:47.867 "is_configured": false, 00:10:47.867 "data_offset": 0, 00:10:47.867 "data_size": 65536 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "name": "BaseBdev3", 00:10:47.867 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:47.867 "is_configured": true, 00:10:47.867 "data_offset": 0, 00:10:47.867 "data_size": 65536 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "name": "BaseBdev4", 00:10:47.867 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:47.867 "is_configured": true, 00:10:47.867 "data_offset": 0, 00:10:47.867 "data_size": 65536 00:10:47.867 } 00:10:47.867 ] 00:10:47.867 }' 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.867 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.436 16:12:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.436 [2024-09-28 16:12:02.867728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.436 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.437 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.437 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.437 "name": "Existed_Raid", 00:10:48.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.437 "strip_size_kb": 64, 00:10:48.437 "state": "configuring", 00:10:48.437 "raid_level": "raid0", 00:10:48.437 "superblock": false, 00:10:48.437 "num_base_bdevs": 4, 00:10:48.437 "num_base_bdevs_discovered": 2, 00:10:48.437 "num_base_bdevs_operational": 4, 00:10:48.437 "base_bdevs_list": [ 00:10:48.437 { 00:10:48.437 "name": "BaseBdev1", 00:10:48.437 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:48.437 "is_configured": true, 00:10:48.437 "data_offset": 0, 00:10:48.437 "data_size": 65536 00:10:48.437 }, 00:10:48.437 { 00:10:48.437 "name": null, 00:10:48.437 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:48.437 "is_configured": false, 00:10:48.437 "data_offset": 0, 00:10:48.437 "data_size": 65536 00:10:48.437 }, 00:10:48.437 { 00:10:48.437 "name": null, 00:10:48.437 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:48.437 "is_configured": false, 00:10:48.437 "data_offset": 0, 00:10:48.437 "data_size": 65536 00:10:48.437 }, 00:10:48.437 { 00:10:48.437 "name": "BaseBdev4", 00:10:48.437 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:48.437 "is_configured": true, 00:10:48.437 "data_offset": 0, 00:10:48.437 "data_size": 65536 00:10:48.437 } 00:10:48.437 ] 00:10:48.437 }' 00:10:48.437 16:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.437 16:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 [2024-09-28 16:12:03.371009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.696 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.956 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.956 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.956 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.956 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.957 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.957 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.957 "name": "Existed_Raid", 00:10:48.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.957 "strip_size_kb": 64, 00:10:48.957 "state": "configuring", 00:10:48.957 "raid_level": "raid0", 00:10:48.957 "superblock": false, 00:10:48.957 "num_base_bdevs": 4, 00:10:48.957 "num_base_bdevs_discovered": 3, 00:10:48.957 "num_base_bdevs_operational": 4, 00:10:48.957 "base_bdevs_list": [ 00:10:48.957 { 00:10:48.957 "name": "BaseBdev1", 00:10:48.957 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:48.957 "is_configured": true, 00:10:48.957 "data_offset": 0, 00:10:48.957 "data_size": 65536 00:10:48.957 }, 00:10:48.957 { 00:10:48.957 "name": null, 00:10:48.957 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:48.957 "is_configured": false, 00:10:48.957 "data_offset": 0, 00:10:48.957 "data_size": 65536 00:10:48.957 }, 00:10:48.957 { 00:10:48.957 "name": "BaseBdev3", 00:10:48.957 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:48.957 "is_configured": 
true, 00:10:48.957 "data_offset": 0, 00:10:48.957 "data_size": 65536 00:10:48.957 }, 00:10:48.957 { 00:10:48.957 "name": "BaseBdev4", 00:10:48.957 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:48.957 "is_configured": true, 00:10:48.957 "data_offset": 0, 00:10:48.957 "data_size": 65536 00:10:48.957 } 00:10:48.957 ] 00:10:48.957 }' 00:10:48.957 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.957 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.217 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.217 [2024-09-28 16:12:03.854150] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.476 16:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.476 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.476 "name": "Existed_Raid", 00:10:49.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.476 "strip_size_kb": 64, 00:10:49.476 "state": "configuring", 00:10:49.477 "raid_level": "raid0", 00:10:49.477 "superblock": false, 00:10:49.477 "num_base_bdevs": 4, 00:10:49.477 "num_base_bdevs_discovered": 2, 00:10:49.477 "num_base_bdevs_operational": 4, 00:10:49.477 
"base_bdevs_list": [ 00:10:49.477 { 00:10:49.477 "name": null, 00:10:49.477 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:49.477 "is_configured": false, 00:10:49.477 "data_offset": 0, 00:10:49.477 "data_size": 65536 00:10:49.477 }, 00:10:49.477 { 00:10:49.477 "name": null, 00:10:49.477 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:49.477 "is_configured": false, 00:10:49.477 "data_offset": 0, 00:10:49.477 "data_size": 65536 00:10:49.477 }, 00:10:49.477 { 00:10:49.477 "name": "BaseBdev3", 00:10:49.477 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:49.477 "is_configured": true, 00:10:49.477 "data_offset": 0, 00:10:49.477 "data_size": 65536 00:10:49.477 }, 00:10:49.477 { 00:10:49.477 "name": "BaseBdev4", 00:10:49.477 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:49.477 "is_configured": true, 00:10:49.477 "data_offset": 0, 00:10:49.477 "data_size": 65536 00:10:49.477 } 00:10:49.477 ] 00:10:49.477 }' 00:10:49.477 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.477 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.737 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.737 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.737 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.737 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:49.997 16:12:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.997 [2024-09-28 16:12:04.472682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.997 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.997 "name": "Existed_Raid", 00:10:49.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.997 "strip_size_kb": 64, 00:10:49.997 "state": "configuring", 00:10:49.997 "raid_level": "raid0", 00:10:49.997 "superblock": false, 00:10:49.997 "num_base_bdevs": 4, 00:10:49.997 "num_base_bdevs_discovered": 3, 00:10:49.997 "num_base_bdevs_operational": 4, 00:10:49.997 "base_bdevs_list": [ 00:10:49.998 { 00:10:49.998 "name": null, 00:10:49.998 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:49.998 "is_configured": false, 00:10:49.998 "data_offset": 0, 00:10:49.998 "data_size": 65536 00:10:49.998 }, 00:10:49.998 { 00:10:49.998 "name": "BaseBdev2", 00:10:49.998 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:49.998 "is_configured": true, 00:10:49.998 "data_offset": 0, 00:10:49.998 "data_size": 65536 00:10:49.998 }, 00:10:49.998 { 00:10:49.998 "name": "BaseBdev3", 00:10:49.998 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:49.998 "is_configured": true, 00:10:49.998 "data_offset": 0, 00:10:49.998 "data_size": 65536 00:10:49.998 }, 00:10:49.998 { 00:10:49.998 "name": "BaseBdev4", 00:10:49.998 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:49.998 "is_configured": true, 00:10:49.998 "data_offset": 0, 00:10:49.998 "data_size": 65536 00:10:49.998 } 00:10:49.998 ] 00:10:49.998 }' 00:10:49.998 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.998 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.259 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.259 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:50.259 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.259 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.259 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0f7600b2-27dd-4889-a3e3-d2a8bbfec47d 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.519 16:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.519 [2024-09-28 16:12:05.021879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:50.519 [2024-09-28 16:12:05.021998] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:50.519 [2024-09-28 16:12:05.022023] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:50.519 [2024-09-28 16:12:05.022392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:50.519 [2024-09-28 16:12:05.022582] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.519 [2024-09-28 16:12:05.022623] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:50.519 [2024-09-28 16:12:05.022941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.519 NewBaseBdev 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.519 [ 00:10:50.519 { 
00:10:50.519 "name": "NewBaseBdev", 00:10:50.519 "aliases": [ 00:10:50.519 "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d" 00:10:50.519 ], 00:10:50.519 "product_name": "Malloc disk", 00:10:50.519 "block_size": 512, 00:10:50.519 "num_blocks": 65536, 00:10:50.519 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:50.519 "assigned_rate_limits": { 00:10:50.519 "rw_ios_per_sec": 0, 00:10:50.519 "rw_mbytes_per_sec": 0, 00:10:50.519 "r_mbytes_per_sec": 0, 00:10:50.519 "w_mbytes_per_sec": 0 00:10:50.519 }, 00:10:50.519 "claimed": true, 00:10:50.519 "claim_type": "exclusive_write", 00:10:50.519 "zoned": false, 00:10:50.519 "supported_io_types": { 00:10:50.519 "read": true, 00:10:50.519 "write": true, 00:10:50.519 "unmap": true, 00:10:50.519 "flush": true, 00:10:50.519 "reset": true, 00:10:50.519 "nvme_admin": false, 00:10:50.519 "nvme_io": false, 00:10:50.519 "nvme_io_md": false, 00:10:50.519 "write_zeroes": true, 00:10:50.519 "zcopy": true, 00:10:50.519 "get_zone_info": false, 00:10:50.519 "zone_management": false, 00:10:50.519 "zone_append": false, 00:10:50.519 "compare": false, 00:10:50.519 "compare_and_write": false, 00:10:50.519 "abort": true, 00:10:50.519 "seek_hole": false, 00:10:50.519 "seek_data": false, 00:10:50.519 "copy": true, 00:10:50.519 "nvme_iov_md": false 00:10:50.519 }, 00:10:50.519 "memory_domains": [ 00:10:50.519 { 00:10:50.519 "dma_device_id": "system", 00:10:50.519 "dma_device_type": 1 00:10:50.519 }, 00:10:50.519 { 00:10:50.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.519 "dma_device_type": 2 00:10:50.519 } 00:10:50.519 ], 00:10:50.519 "driver_specific": {} 00:10:50.519 } 00:10:50.519 ] 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:50.519 
16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.519 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.520 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.520 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.520 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.520 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.520 "name": "Existed_Raid", 00:10:50.520 "uuid": "59fae69c-08eb-4873-a113-8ec1e265bd5a", 00:10:50.520 "strip_size_kb": 64, 00:10:50.520 "state": "online", 00:10:50.520 "raid_level": "raid0", 00:10:50.520 "superblock": false, 00:10:50.520 "num_base_bdevs": 4, 00:10:50.520 "num_base_bdevs_discovered": 4, 00:10:50.520 
"num_base_bdevs_operational": 4, 00:10:50.520 "base_bdevs_list": [ 00:10:50.520 { 00:10:50.520 "name": "NewBaseBdev", 00:10:50.520 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:50.520 "is_configured": true, 00:10:50.520 "data_offset": 0, 00:10:50.520 "data_size": 65536 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "name": "BaseBdev2", 00:10:50.520 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:50.520 "is_configured": true, 00:10:50.520 "data_offset": 0, 00:10:50.520 "data_size": 65536 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "name": "BaseBdev3", 00:10:50.520 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:50.520 "is_configured": true, 00:10:50.520 "data_offset": 0, 00:10:50.520 "data_size": 65536 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "name": "BaseBdev4", 00:10:50.520 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:50.520 "is_configured": true, 00:10:50.520 "data_offset": 0, 00:10:50.520 "data_size": 65536 00:10:50.520 } 00:10:50.520 ] 00:10:50.520 }' 00:10:50.520 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.520 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 [2024-09-28 16:12:05.497499] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.114 "name": "Existed_Raid", 00:10:51.114 "aliases": [ 00:10:51.114 "59fae69c-08eb-4873-a113-8ec1e265bd5a" 00:10:51.114 ], 00:10:51.114 "product_name": "Raid Volume", 00:10:51.114 "block_size": 512, 00:10:51.114 "num_blocks": 262144, 00:10:51.114 "uuid": "59fae69c-08eb-4873-a113-8ec1e265bd5a", 00:10:51.114 "assigned_rate_limits": { 00:10:51.114 "rw_ios_per_sec": 0, 00:10:51.114 "rw_mbytes_per_sec": 0, 00:10:51.114 "r_mbytes_per_sec": 0, 00:10:51.114 "w_mbytes_per_sec": 0 00:10:51.114 }, 00:10:51.114 "claimed": false, 00:10:51.114 "zoned": false, 00:10:51.114 "supported_io_types": { 00:10:51.114 "read": true, 00:10:51.114 "write": true, 00:10:51.114 "unmap": true, 00:10:51.114 "flush": true, 00:10:51.114 "reset": true, 00:10:51.114 "nvme_admin": false, 00:10:51.114 "nvme_io": false, 00:10:51.114 "nvme_io_md": false, 00:10:51.114 "write_zeroes": true, 00:10:51.114 "zcopy": false, 00:10:51.114 "get_zone_info": false, 00:10:51.114 "zone_management": false, 00:10:51.114 "zone_append": false, 00:10:51.114 "compare": false, 00:10:51.114 "compare_and_write": false, 00:10:51.114 "abort": false, 00:10:51.114 "seek_hole": false, 00:10:51.114 "seek_data": false, 00:10:51.114 "copy": false, 00:10:51.114 "nvme_iov_md": false 00:10:51.114 }, 00:10:51.114 "memory_domains": [ 00:10:51.114 { 00:10:51.114 "dma_device_id": "system", 
00:10:51.114 "dma_device_type": 1 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.114 "dma_device_type": 2 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "dma_device_id": "system", 00:10:51.114 "dma_device_type": 1 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.114 "dma_device_type": 2 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "dma_device_id": "system", 00:10:51.114 "dma_device_type": 1 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.114 "dma_device_type": 2 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "dma_device_id": "system", 00:10:51.114 "dma_device_type": 1 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.114 "dma_device_type": 2 00:10:51.114 } 00:10:51.114 ], 00:10:51.114 "driver_specific": { 00:10:51.114 "raid": { 00:10:51.114 "uuid": "59fae69c-08eb-4873-a113-8ec1e265bd5a", 00:10:51.114 "strip_size_kb": 64, 00:10:51.114 "state": "online", 00:10:51.114 "raid_level": "raid0", 00:10:51.114 "superblock": false, 00:10:51.114 "num_base_bdevs": 4, 00:10:51.114 "num_base_bdevs_discovered": 4, 00:10:51.114 "num_base_bdevs_operational": 4, 00:10:51.114 "base_bdevs_list": [ 00:10:51.114 { 00:10:51.114 "name": "NewBaseBdev", 00:10:51.114 "uuid": "0f7600b2-27dd-4889-a3e3-d2a8bbfec47d", 00:10:51.114 "is_configured": true, 00:10:51.114 "data_offset": 0, 00:10:51.114 "data_size": 65536 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "name": "BaseBdev2", 00:10:51.114 "uuid": "f4c89589-1837-4051-a8d1-0fdbf224b42c", 00:10:51.114 "is_configured": true, 00:10:51.114 "data_offset": 0, 00:10:51.114 "data_size": 65536 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "name": "BaseBdev3", 00:10:51.114 "uuid": "cc1f5c21-4695-4a9b-9c1b-cf7edc53ef51", 00:10:51.114 "is_configured": true, 00:10:51.114 "data_offset": 0, 00:10:51.114 "data_size": 65536 00:10:51.114 }, 00:10:51.114 { 00:10:51.114 "name": "BaseBdev4", 
00:10:51.114 "uuid": "7f5c848b-0bfc-4f20-8cd2-c04e103533d4", 00:10:51.114 "is_configured": true, 00:10:51.114 "data_offset": 0, 00:10:51.114 "data_size": 65536 00:10:51.114 } 00:10:51.114 ] 00:10:51.114 } 00:10:51.114 } 00:10:51.114 }' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:51.114 BaseBdev2 00:10:51.114 BaseBdev3 00:10:51.114 BaseBdev4' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.114 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.115 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.115 [2024-09-28 16:12:05.792565] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.115 [2024-09-28 16:12:05.792635] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.115 [2024-09-28 16:12:05.792729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.115 [2024-09-28 16:12:05.792838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.115 [2024-09-28 16:12:05.792886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69405 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 69405 
']' 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69405 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69405 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69405' 00:10:51.376 killing process with pid 69405 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69405 00:10:51.376 [2024-09-28 16:12:05.844820] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.376 16:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69405 00:10:51.635 [2024-09-28 16:12:06.246906] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:53.015 00:10:53.015 real 0m12.068s 00:10:53.015 user 0m18.931s 00:10:53.015 sys 0m2.316s 00:10:53.015 ************************************ 00:10:53.015 END TEST raid_state_function_test 00:10:53.015 ************************************ 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.015 16:12:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:53.015 
16:12:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.015 16:12:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.015 16:12:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.015 ************************************ 00:10:53.015 START TEST raid_state_function_test_sb 00:10:53.015 ************************************ 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:53.015 Process raid pid: 70086 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70086 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70086' 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70086 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70086 ']' 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.015 16:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.274 [2024-09-28 16:12:07.746973] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:53.274 [2024-09-28 16:12:07.747101] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.274 [2024-09-28 16:12:07.912998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.534 [2024-09-28 16:12:08.156885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.793 [2024-09-28 16:12:08.388536] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.793 [2024-09-28 16:12:08.388573] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.053 [2024-09-28 16:12:08.583603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.053 [2024-09-28 16:12:08.583735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.053 [2024-09-28 16:12:08.583770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.053 [2024-09-28 16:12:08.583796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.053 [2024-09-28 16:12:08.583815] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:54.053 [2024-09-28 16:12:08.583839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.053 [2024-09-28 16:12:08.583858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.053 [2024-09-28 16:12:08.583887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.053 16:12:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.053 "name": "Existed_Raid", 00:10:54.053 "uuid": "890b1331-9dd6-42cc-9d6a-23f4b7761980", 00:10:54.053 "strip_size_kb": 64, 00:10:54.053 "state": "configuring", 00:10:54.053 "raid_level": "raid0", 00:10:54.053 "superblock": true, 00:10:54.053 "num_base_bdevs": 4, 00:10:54.053 "num_base_bdevs_discovered": 0, 00:10:54.053 "num_base_bdevs_operational": 4, 00:10:54.053 "base_bdevs_list": [ 00:10:54.053 { 00:10:54.053 "name": "BaseBdev1", 00:10:54.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.053 "is_configured": false, 00:10:54.053 "data_offset": 0, 00:10:54.053 "data_size": 0 00:10:54.053 }, 00:10:54.053 { 00:10:54.053 "name": "BaseBdev2", 00:10:54.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.053 "is_configured": false, 00:10:54.053 "data_offset": 0, 00:10:54.053 "data_size": 0 00:10:54.053 }, 00:10:54.053 { 00:10:54.053 "name": "BaseBdev3", 00:10:54.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.053 "is_configured": false, 00:10:54.053 "data_offset": 0, 00:10:54.053 "data_size": 0 00:10:54.053 }, 00:10:54.053 { 00:10:54.053 "name": "BaseBdev4", 00:10:54.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.053 "is_configured": false, 00:10:54.053 "data_offset": 0, 00:10:54.053 "data_size": 0 00:10:54.053 } 00:10:54.053 ] 00:10:54.053 }' 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.053 16:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.622 [2024-09-28 16:12:09.014795] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.622 [2024-09-28 16:12:09.014885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.622 [2024-09-28 16:12:09.026807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.622 [2024-09-28 16:12:09.026890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.622 [2024-09-28 16:12:09.026934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.622 [2024-09-28 16:12:09.026957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.622 [2024-09-28 16:12:09.026975] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.622 [2024-09-28 16:12:09.026996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.622 [2024-09-28 16:12:09.027013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:54.622 [2024-09-28 16:12:09.027034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.622 [2024-09-28 16:12:09.085838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.622 BaseBdev1 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.622 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.622 [ 00:10:54.622 { 00:10:54.622 "name": "BaseBdev1", 00:10:54.622 "aliases": [ 00:10:54.622 "b0f4bd0d-0419-4656-ae0e-a1d9819cca54" 00:10:54.622 ], 00:10:54.622 "product_name": "Malloc disk", 00:10:54.622 "block_size": 512, 00:10:54.622 "num_blocks": 65536, 00:10:54.622 "uuid": "b0f4bd0d-0419-4656-ae0e-a1d9819cca54", 00:10:54.622 "assigned_rate_limits": { 00:10:54.622 "rw_ios_per_sec": 0, 00:10:54.622 "rw_mbytes_per_sec": 0, 00:10:54.622 "r_mbytes_per_sec": 0, 00:10:54.622 "w_mbytes_per_sec": 0 00:10:54.622 }, 00:10:54.622 "claimed": true, 00:10:54.622 "claim_type": "exclusive_write", 00:10:54.622 "zoned": false, 00:10:54.622 "supported_io_types": { 00:10:54.622 "read": true, 00:10:54.622 "write": true, 00:10:54.622 "unmap": true, 00:10:54.622 "flush": true, 00:10:54.622 "reset": true, 00:10:54.622 "nvme_admin": false, 00:10:54.622 "nvme_io": false, 00:10:54.622 "nvme_io_md": false, 00:10:54.622 "write_zeroes": true, 00:10:54.622 "zcopy": true, 00:10:54.623 "get_zone_info": false, 00:10:54.623 "zone_management": false, 00:10:54.623 "zone_append": false, 00:10:54.623 "compare": false, 00:10:54.623 "compare_and_write": false, 00:10:54.623 "abort": true, 00:10:54.623 "seek_hole": false, 00:10:54.623 "seek_data": false, 00:10:54.623 "copy": true, 00:10:54.623 "nvme_iov_md": false 00:10:54.623 }, 00:10:54.623 "memory_domains": [ 00:10:54.623 { 00:10:54.623 "dma_device_id": "system", 00:10:54.623 "dma_device_type": 1 00:10:54.623 }, 00:10:54.623 { 00:10:54.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.623 "dma_device_type": 2 00:10:54.623 } 00:10:54.623 ], 00:10:54.623 "driver_specific": {} 
00:10:54.623 } 00:10:54.623 ] 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.623 "name": "Existed_Raid", 00:10:54.623 "uuid": "9e3452f8-db3b-4d24-bbab-92466b64bc17", 00:10:54.623 "strip_size_kb": 64, 00:10:54.623 "state": "configuring", 00:10:54.623 "raid_level": "raid0", 00:10:54.623 "superblock": true, 00:10:54.623 "num_base_bdevs": 4, 00:10:54.623 "num_base_bdevs_discovered": 1, 00:10:54.623 "num_base_bdevs_operational": 4, 00:10:54.623 "base_bdevs_list": [ 00:10:54.623 { 00:10:54.623 "name": "BaseBdev1", 00:10:54.623 "uuid": "b0f4bd0d-0419-4656-ae0e-a1d9819cca54", 00:10:54.623 "is_configured": true, 00:10:54.623 "data_offset": 2048, 00:10:54.623 "data_size": 63488 00:10:54.623 }, 00:10:54.623 { 00:10:54.623 "name": "BaseBdev2", 00:10:54.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.623 "is_configured": false, 00:10:54.623 "data_offset": 0, 00:10:54.623 "data_size": 0 00:10:54.623 }, 00:10:54.623 { 00:10:54.623 "name": "BaseBdev3", 00:10:54.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.623 "is_configured": false, 00:10:54.623 "data_offset": 0, 00:10:54.623 "data_size": 0 00:10:54.623 }, 00:10:54.623 { 00:10:54.623 "name": "BaseBdev4", 00:10:54.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.623 "is_configured": false, 00:10:54.623 "data_offset": 0, 00:10:54.623 "data_size": 0 00:10:54.623 } 00:10:54.623 ] 00:10:54.623 }' 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.623 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.882 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.882 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.883 16:12:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.883 [2024-09-28 16:12:09.561008] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.883 [2024-09-28 16:12:09.561111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:54.883 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.883 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.883 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.142 [2024-09-28 16:12:09.573048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.142 [2024-09-28 16:12:09.575274] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.142 [2024-09-28 16:12:09.575351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.142 [2024-09-28 16:12:09.575380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.142 [2024-09-28 16:12:09.575405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.142 [2024-09-28 16:12:09.575423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.142 [2024-09-28 16:12:09.575443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:55.142 16:12:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.142 "name": 
"Existed_Raid", 00:10:55.142 "uuid": "feb9cf44-43e8-424e-b27a-702c1a9f125c", 00:10:55.142 "strip_size_kb": 64, 00:10:55.142 "state": "configuring", 00:10:55.142 "raid_level": "raid0", 00:10:55.142 "superblock": true, 00:10:55.142 "num_base_bdevs": 4, 00:10:55.142 "num_base_bdevs_discovered": 1, 00:10:55.142 "num_base_bdevs_operational": 4, 00:10:55.142 "base_bdevs_list": [ 00:10:55.142 { 00:10:55.142 "name": "BaseBdev1", 00:10:55.142 "uuid": "b0f4bd0d-0419-4656-ae0e-a1d9819cca54", 00:10:55.142 "is_configured": true, 00:10:55.142 "data_offset": 2048, 00:10:55.142 "data_size": 63488 00:10:55.142 }, 00:10:55.142 { 00:10:55.142 "name": "BaseBdev2", 00:10:55.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.142 "is_configured": false, 00:10:55.142 "data_offset": 0, 00:10:55.142 "data_size": 0 00:10:55.142 }, 00:10:55.142 { 00:10:55.142 "name": "BaseBdev3", 00:10:55.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.142 "is_configured": false, 00:10:55.142 "data_offset": 0, 00:10:55.142 "data_size": 0 00:10:55.142 }, 00:10:55.142 { 00:10:55.142 "name": "BaseBdev4", 00:10:55.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.142 "is_configured": false, 00:10:55.142 "data_offset": 0, 00:10:55.142 "data_size": 0 00:10:55.142 } 00:10:55.142 ] 00:10:55.142 }' 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.142 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.401 16:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.401 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.401 16:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.402 [2024-09-28 16:12:10.029219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:55.402 BaseBdev2 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.402 [ 00:10:55.402 { 00:10:55.402 "name": "BaseBdev2", 00:10:55.402 "aliases": [ 00:10:55.402 "5107d9b5-25ad-431c-87cf-865b3c76cc7e" 00:10:55.402 ], 00:10:55.402 "product_name": "Malloc disk", 00:10:55.402 "block_size": 512, 00:10:55.402 "num_blocks": 65536, 00:10:55.402 "uuid": "5107d9b5-25ad-431c-87cf-865b3c76cc7e", 00:10:55.402 
"assigned_rate_limits": { 00:10:55.402 "rw_ios_per_sec": 0, 00:10:55.402 "rw_mbytes_per_sec": 0, 00:10:55.402 "r_mbytes_per_sec": 0, 00:10:55.402 "w_mbytes_per_sec": 0 00:10:55.402 }, 00:10:55.402 "claimed": true, 00:10:55.402 "claim_type": "exclusive_write", 00:10:55.402 "zoned": false, 00:10:55.402 "supported_io_types": { 00:10:55.402 "read": true, 00:10:55.402 "write": true, 00:10:55.402 "unmap": true, 00:10:55.402 "flush": true, 00:10:55.402 "reset": true, 00:10:55.402 "nvme_admin": false, 00:10:55.402 "nvme_io": false, 00:10:55.402 "nvme_io_md": false, 00:10:55.402 "write_zeroes": true, 00:10:55.402 "zcopy": true, 00:10:55.402 "get_zone_info": false, 00:10:55.402 "zone_management": false, 00:10:55.402 "zone_append": false, 00:10:55.402 "compare": false, 00:10:55.402 "compare_and_write": false, 00:10:55.402 "abort": true, 00:10:55.402 "seek_hole": false, 00:10:55.402 "seek_data": false, 00:10:55.402 "copy": true, 00:10:55.402 "nvme_iov_md": false 00:10:55.402 }, 00:10:55.402 "memory_domains": [ 00:10:55.402 { 00:10:55.402 "dma_device_id": "system", 00:10:55.402 "dma_device_type": 1 00:10:55.402 }, 00:10:55.402 { 00:10:55.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.402 "dma_device_type": 2 00:10:55.402 } 00:10:55.402 ], 00:10:55.402 "driver_specific": {} 00:10:55.402 } 00:10:55.402 ] 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.402 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.661 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.661 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.661 "name": "Existed_Raid", 00:10:55.661 "uuid": "feb9cf44-43e8-424e-b27a-702c1a9f125c", 00:10:55.661 "strip_size_kb": 64, 00:10:55.661 "state": "configuring", 00:10:55.661 "raid_level": "raid0", 00:10:55.661 "superblock": true, 00:10:55.661 "num_base_bdevs": 4, 00:10:55.661 "num_base_bdevs_discovered": 2, 00:10:55.661 "num_base_bdevs_operational": 4, 
00:10:55.661 "base_bdevs_list": [ 00:10:55.661 { 00:10:55.661 "name": "BaseBdev1", 00:10:55.661 "uuid": "b0f4bd0d-0419-4656-ae0e-a1d9819cca54", 00:10:55.661 "is_configured": true, 00:10:55.661 "data_offset": 2048, 00:10:55.661 "data_size": 63488 00:10:55.661 }, 00:10:55.661 { 00:10:55.661 "name": "BaseBdev2", 00:10:55.661 "uuid": "5107d9b5-25ad-431c-87cf-865b3c76cc7e", 00:10:55.661 "is_configured": true, 00:10:55.661 "data_offset": 2048, 00:10:55.661 "data_size": 63488 00:10:55.661 }, 00:10:55.661 { 00:10:55.661 "name": "BaseBdev3", 00:10:55.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.661 "is_configured": false, 00:10:55.661 "data_offset": 0, 00:10:55.661 "data_size": 0 00:10:55.661 }, 00:10:55.661 { 00:10:55.661 "name": "BaseBdev4", 00:10:55.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.661 "is_configured": false, 00:10:55.661 "data_offset": 0, 00:10:55.661 "data_size": 0 00:10:55.661 } 00:10:55.661 ] 00:10:55.661 }' 00:10:55.661 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.661 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.920 [2024-09-28 16:12:10.556833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.920 BaseBdev3 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.920 [ 00:10:55.920 { 00:10:55.920 "name": "BaseBdev3", 00:10:55.920 "aliases": [ 00:10:55.920 "c5eee623-0b57-40c1-a970-269d13e1dec0" 00:10:55.920 ], 00:10:55.920 "product_name": "Malloc disk", 00:10:55.920 "block_size": 512, 00:10:55.920 "num_blocks": 65536, 00:10:55.920 "uuid": "c5eee623-0b57-40c1-a970-269d13e1dec0", 00:10:55.920 "assigned_rate_limits": { 00:10:55.920 "rw_ios_per_sec": 0, 00:10:55.920 "rw_mbytes_per_sec": 0, 00:10:55.920 "r_mbytes_per_sec": 0, 00:10:55.920 "w_mbytes_per_sec": 0 00:10:55.920 }, 00:10:55.920 "claimed": true, 00:10:55.920 "claim_type": "exclusive_write", 00:10:55.920 "zoned": false, 00:10:55.920 "supported_io_types": { 00:10:55.920 "read": true, 00:10:55.920 
"write": true, 00:10:55.920 "unmap": true, 00:10:55.920 "flush": true, 00:10:55.920 "reset": true, 00:10:55.920 "nvme_admin": false, 00:10:55.920 "nvme_io": false, 00:10:55.920 "nvme_io_md": false, 00:10:55.920 "write_zeroes": true, 00:10:55.920 "zcopy": true, 00:10:55.920 "get_zone_info": false, 00:10:55.920 "zone_management": false, 00:10:55.920 "zone_append": false, 00:10:55.920 "compare": false, 00:10:55.920 "compare_and_write": false, 00:10:55.920 "abort": true, 00:10:55.920 "seek_hole": false, 00:10:55.920 "seek_data": false, 00:10:55.920 "copy": true, 00:10:55.920 "nvme_iov_md": false 00:10:55.920 }, 00:10:55.920 "memory_domains": [ 00:10:55.920 { 00:10:55.920 "dma_device_id": "system", 00:10:55.920 "dma_device_type": 1 00:10:55.920 }, 00:10:55.920 { 00:10:55.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.920 "dma_device_type": 2 00:10:55.920 } 00:10:55.920 ], 00:10:55.920 "driver_specific": {} 00:10:55.920 } 00:10:55.920 ] 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.920 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.180 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.180 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.180 "name": "Existed_Raid", 00:10:56.180 "uuid": "feb9cf44-43e8-424e-b27a-702c1a9f125c", 00:10:56.180 "strip_size_kb": 64, 00:10:56.180 "state": "configuring", 00:10:56.180 "raid_level": "raid0", 00:10:56.180 "superblock": true, 00:10:56.180 "num_base_bdevs": 4, 00:10:56.180 "num_base_bdevs_discovered": 3, 00:10:56.180 "num_base_bdevs_operational": 4, 00:10:56.180 "base_bdevs_list": [ 00:10:56.180 { 00:10:56.180 "name": "BaseBdev1", 00:10:56.180 "uuid": "b0f4bd0d-0419-4656-ae0e-a1d9819cca54", 00:10:56.180 "is_configured": true, 00:10:56.180 "data_offset": 2048, 00:10:56.180 "data_size": 63488 00:10:56.180 }, 00:10:56.180 { 00:10:56.180 "name": "BaseBdev2", 00:10:56.180 "uuid": 
"5107d9b5-25ad-431c-87cf-865b3c76cc7e", 00:10:56.180 "is_configured": true, 00:10:56.180 "data_offset": 2048, 00:10:56.180 "data_size": 63488 00:10:56.180 }, 00:10:56.180 { 00:10:56.180 "name": "BaseBdev3", 00:10:56.180 "uuid": "c5eee623-0b57-40c1-a970-269d13e1dec0", 00:10:56.180 "is_configured": true, 00:10:56.180 "data_offset": 2048, 00:10:56.180 "data_size": 63488 00:10:56.180 }, 00:10:56.180 { 00:10:56.180 "name": "BaseBdev4", 00:10:56.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.180 "is_configured": false, 00:10:56.180 "data_offset": 0, 00:10:56.180 "data_size": 0 00:10:56.180 } 00:10:56.180 ] 00:10:56.180 }' 00:10:56.180 16:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.180 16:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 BaseBdev4 00:10:56.439 [2024-09-28 16:12:11.075416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:56.439 [2024-09-28 16:12:11.075700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.439 [2024-09-28 16:12:11.075717] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:56.439 [2024-09-28 16:12:11.076020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:56.439 [2024-09-28 16:12:11.076180] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.439 [2024-09-28 16:12:11.076195] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:56.439 [2024-09-28 16:12:11.076383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 [ 00:10:56.439 { 00:10:56.439 "name": "BaseBdev4", 00:10:56.439 "aliases": [ 00:10:56.439 "fc0b8c3f-03bd-480c-b840-3521e84133d1" 00:10:56.439 ], 00:10:56.439 "product_name": "Malloc disk", 00:10:56.439 "block_size": 512, 00:10:56.439 
"num_blocks": 65536, 00:10:56.439 "uuid": "fc0b8c3f-03bd-480c-b840-3521e84133d1", 00:10:56.439 "assigned_rate_limits": { 00:10:56.439 "rw_ios_per_sec": 0, 00:10:56.439 "rw_mbytes_per_sec": 0, 00:10:56.439 "r_mbytes_per_sec": 0, 00:10:56.439 "w_mbytes_per_sec": 0 00:10:56.439 }, 00:10:56.439 "claimed": true, 00:10:56.439 "claim_type": "exclusive_write", 00:10:56.439 "zoned": false, 00:10:56.439 "supported_io_types": { 00:10:56.439 "read": true, 00:10:56.439 "write": true, 00:10:56.439 "unmap": true, 00:10:56.439 "flush": true, 00:10:56.439 "reset": true, 00:10:56.439 "nvme_admin": false, 00:10:56.439 "nvme_io": false, 00:10:56.439 "nvme_io_md": false, 00:10:56.439 "write_zeroes": true, 00:10:56.439 "zcopy": true, 00:10:56.439 "get_zone_info": false, 00:10:56.439 "zone_management": false, 00:10:56.439 "zone_append": false, 00:10:56.439 "compare": false, 00:10:56.439 "compare_and_write": false, 00:10:56.439 "abort": true, 00:10:56.439 "seek_hole": false, 00:10:56.439 "seek_data": false, 00:10:56.439 "copy": true, 00:10:56.439 "nvme_iov_md": false 00:10:56.439 }, 00:10:56.439 "memory_domains": [ 00:10:56.439 { 00:10:56.439 "dma_device_id": "system", 00:10:56.439 "dma_device_type": 1 00:10:56.439 }, 00:10:56.439 { 00:10:56.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.439 "dma_device_type": 2 00:10:56.439 } 00:10:56.439 ], 00:10:56.439 "driver_specific": {} 00:10:56.439 } 00:10:56.439 ] 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.439 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.440 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.440 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.440 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.698 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.698 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.698 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.698 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.698 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.698 "name": "Existed_Raid", 00:10:56.698 "uuid": "feb9cf44-43e8-424e-b27a-702c1a9f125c", 00:10:56.698 "strip_size_kb": 64, 00:10:56.698 "state": "online", 00:10:56.698 "raid_level": "raid0", 00:10:56.698 "superblock": true, 00:10:56.698 "num_base_bdevs": 4, 
00:10:56.698 "num_base_bdevs_discovered": 4, 00:10:56.698 "num_base_bdevs_operational": 4, 00:10:56.698 "base_bdevs_list": [ 00:10:56.698 { 00:10:56.698 "name": "BaseBdev1", 00:10:56.698 "uuid": "b0f4bd0d-0419-4656-ae0e-a1d9819cca54", 00:10:56.698 "is_configured": true, 00:10:56.698 "data_offset": 2048, 00:10:56.698 "data_size": 63488 00:10:56.698 }, 00:10:56.698 { 00:10:56.698 "name": "BaseBdev2", 00:10:56.698 "uuid": "5107d9b5-25ad-431c-87cf-865b3c76cc7e", 00:10:56.698 "is_configured": true, 00:10:56.698 "data_offset": 2048, 00:10:56.698 "data_size": 63488 00:10:56.698 }, 00:10:56.698 { 00:10:56.698 "name": "BaseBdev3", 00:10:56.698 "uuid": "c5eee623-0b57-40c1-a970-269d13e1dec0", 00:10:56.698 "is_configured": true, 00:10:56.698 "data_offset": 2048, 00:10:56.698 "data_size": 63488 00:10:56.698 }, 00:10:56.698 { 00:10:56.698 "name": "BaseBdev4", 00:10:56.698 "uuid": "fc0b8c3f-03bd-480c-b840-3521e84133d1", 00:10:56.698 "is_configured": true, 00:10:56.698 "data_offset": 2048, 00:10:56.698 "data_size": 63488 00:10:56.698 } 00:10:56.698 ] 00:10:56.698 }' 00:10:56.698 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.698 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.955 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.955 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.955 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.955 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.955 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.955 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.955 
16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.955 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.956 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.956 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.956 [2024-09-28 16:12:11.555003] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.956 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.956 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.956 "name": "Existed_Raid", 00:10:56.956 "aliases": [ 00:10:56.956 "feb9cf44-43e8-424e-b27a-702c1a9f125c" 00:10:56.956 ], 00:10:56.956 "product_name": "Raid Volume", 00:10:56.956 "block_size": 512, 00:10:56.956 "num_blocks": 253952, 00:10:56.956 "uuid": "feb9cf44-43e8-424e-b27a-702c1a9f125c", 00:10:56.956 "assigned_rate_limits": { 00:10:56.956 "rw_ios_per_sec": 0, 00:10:56.956 "rw_mbytes_per_sec": 0, 00:10:56.956 "r_mbytes_per_sec": 0, 00:10:56.956 "w_mbytes_per_sec": 0 00:10:56.956 }, 00:10:56.956 "claimed": false, 00:10:56.956 "zoned": false, 00:10:56.956 "supported_io_types": { 00:10:56.956 "read": true, 00:10:56.956 "write": true, 00:10:56.956 "unmap": true, 00:10:56.956 "flush": true, 00:10:56.956 "reset": true, 00:10:56.956 "nvme_admin": false, 00:10:56.956 "nvme_io": false, 00:10:56.956 "nvme_io_md": false, 00:10:56.956 "write_zeroes": true, 00:10:56.956 "zcopy": false, 00:10:56.956 "get_zone_info": false, 00:10:56.956 "zone_management": false, 00:10:56.956 "zone_append": false, 00:10:56.956 "compare": false, 00:10:56.956 "compare_and_write": false, 00:10:56.956 "abort": false, 00:10:56.956 "seek_hole": false, 00:10:56.956 "seek_data": false, 00:10:56.956 "copy": false, 00:10:56.956 
"nvme_iov_md": false 00:10:56.956 }, 00:10:56.956 "memory_domains": [ 00:10:56.956 { 00:10:56.956 "dma_device_id": "system", 00:10:56.956 "dma_device_type": 1 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.956 "dma_device_type": 2 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "dma_device_id": "system", 00:10:56.956 "dma_device_type": 1 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.956 "dma_device_type": 2 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "dma_device_id": "system", 00:10:56.956 "dma_device_type": 1 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.956 "dma_device_type": 2 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "dma_device_id": "system", 00:10:56.956 "dma_device_type": 1 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.956 "dma_device_type": 2 00:10:56.956 } 00:10:56.956 ], 00:10:56.956 "driver_specific": { 00:10:56.956 "raid": { 00:10:56.956 "uuid": "feb9cf44-43e8-424e-b27a-702c1a9f125c", 00:10:56.956 "strip_size_kb": 64, 00:10:56.956 "state": "online", 00:10:56.956 "raid_level": "raid0", 00:10:56.956 "superblock": true, 00:10:56.956 "num_base_bdevs": 4, 00:10:56.956 "num_base_bdevs_discovered": 4, 00:10:56.956 "num_base_bdevs_operational": 4, 00:10:56.956 "base_bdevs_list": [ 00:10:56.956 { 00:10:56.956 "name": "BaseBdev1", 00:10:56.956 "uuid": "b0f4bd0d-0419-4656-ae0e-a1d9819cca54", 00:10:56.956 "is_configured": true, 00:10:56.956 "data_offset": 2048, 00:10:56.956 "data_size": 63488 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "name": "BaseBdev2", 00:10:56.956 "uuid": "5107d9b5-25ad-431c-87cf-865b3c76cc7e", 00:10:56.956 "is_configured": true, 00:10:56.956 "data_offset": 2048, 00:10:56.956 "data_size": 63488 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "name": "BaseBdev3", 00:10:56.956 "uuid": "c5eee623-0b57-40c1-a970-269d13e1dec0", 00:10:56.956 "is_configured": true, 
00:10:56.956 "data_offset": 2048, 00:10:56.956 "data_size": 63488 00:10:56.956 }, 00:10:56.956 { 00:10:56.956 "name": "BaseBdev4", 00:10:56.956 "uuid": "fc0b8c3f-03bd-480c-b840-3521e84133d1", 00:10:56.956 "is_configured": true, 00:10:56.956 "data_offset": 2048, 00:10:56.956 "data_size": 63488 00:10:56.956 } 00:10:56.956 ] 00:10:56.956 } 00:10:56.956 } 00:10:56.956 }' 00:10:56.956 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.956 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.956 BaseBdev2 00:10:56.956 BaseBdev3 00:10:56.956 BaseBdev4' 00:10:56.956 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.215 16:12:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.215 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.215 [2024-09-28 16:12:11.894136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.215 [2024-09-28 16:12:11.894210] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.215 [2024-09-28 16:12:11.894311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.474 16:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.474 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.474 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.474 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.474 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.475 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.475 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.475 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.475 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:57.475 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.475 "name": "Existed_Raid", 00:10:57.475 "uuid": "feb9cf44-43e8-424e-b27a-702c1a9f125c", 00:10:57.475 "strip_size_kb": 64, 00:10:57.475 "state": "offline", 00:10:57.475 "raid_level": "raid0", 00:10:57.475 "superblock": true, 00:10:57.475 "num_base_bdevs": 4, 00:10:57.475 "num_base_bdevs_discovered": 3, 00:10:57.475 "num_base_bdevs_operational": 3, 00:10:57.475 "base_bdevs_list": [ 00:10:57.475 { 00:10:57.475 "name": null, 00:10:57.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.475 "is_configured": false, 00:10:57.475 "data_offset": 0, 00:10:57.475 "data_size": 63488 00:10:57.475 }, 00:10:57.475 { 00:10:57.475 "name": "BaseBdev2", 00:10:57.475 "uuid": "5107d9b5-25ad-431c-87cf-865b3c76cc7e", 00:10:57.475 "is_configured": true, 00:10:57.475 "data_offset": 2048, 00:10:57.475 "data_size": 63488 00:10:57.475 }, 00:10:57.475 { 00:10:57.475 "name": "BaseBdev3", 00:10:57.475 "uuid": "c5eee623-0b57-40c1-a970-269d13e1dec0", 00:10:57.475 "is_configured": true, 00:10:57.475 "data_offset": 2048, 00:10:57.475 "data_size": 63488 00:10:57.475 }, 00:10:57.475 { 00:10:57.475 "name": "BaseBdev4", 00:10:57.475 "uuid": "fc0b8c3f-03bd-480c-b840-3521e84133d1", 00:10:57.475 "is_configured": true, 00:10:57.475 "data_offset": 2048, 00:10:57.475 "data_size": 63488 00:10:57.475 } 00:10:57.475 ] 00:10:57.475 }' 00:10:57.475 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.475 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.733 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:57.733 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.733 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.733 
16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.733 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.733 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.992 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.993 [2024-09-28 16:12:12.450801] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.993 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.993 [2024-09-28 16:12:12.611083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:58.251 16:12:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.251 [2024-09-28 16:12:12.773952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:58.251 [2024-09-28 16:12:12.774053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.251 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.252 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.511 BaseBdev2 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.511 16:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.511 [ 00:10:58.511 { 00:10:58.511 "name": "BaseBdev2", 00:10:58.511 "aliases": [ 00:10:58.511 
"2051587e-f169-4fd9-8e87-8aed8748584c" 00:10:58.511 ], 00:10:58.511 "product_name": "Malloc disk", 00:10:58.511 "block_size": 512, 00:10:58.511 "num_blocks": 65536, 00:10:58.511 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:10:58.511 "assigned_rate_limits": { 00:10:58.511 "rw_ios_per_sec": 0, 00:10:58.511 "rw_mbytes_per_sec": 0, 00:10:58.511 "r_mbytes_per_sec": 0, 00:10:58.511 "w_mbytes_per_sec": 0 00:10:58.511 }, 00:10:58.511 "claimed": false, 00:10:58.511 "zoned": false, 00:10:58.511 "supported_io_types": { 00:10:58.511 "read": true, 00:10:58.511 "write": true, 00:10:58.511 "unmap": true, 00:10:58.511 "flush": true, 00:10:58.511 "reset": true, 00:10:58.511 "nvme_admin": false, 00:10:58.511 "nvme_io": false, 00:10:58.511 "nvme_io_md": false, 00:10:58.511 "write_zeroes": true, 00:10:58.511 "zcopy": true, 00:10:58.511 "get_zone_info": false, 00:10:58.511 "zone_management": false, 00:10:58.511 "zone_append": false, 00:10:58.511 "compare": false, 00:10:58.511 "compare_and_write": false, 00:10:58.511 "abort": true, 00:10:58.511 "seek_hole": false, 00:10:58.511 "seek_data": false, 00:10:58.511 "copy": true, 00:10:58.511 "nvme_iov_md": false 00:10:58.511 }, 00:10:58.511 "memory_domains": [ 00:10:58.511 { 00:10:58.511 "dma_device_id": "system", 00:10:58.511 "dma_device_type": 1 00:10:58.511 }, 00:10:58.511 { 00:10:58.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.511 "dma_device_type": 2 00:10:58.511 } 00:10:58.511 ], 00:10:58.511 "driver_specific": {} 00:10:58.511 } 00:10:58.511 ] 00:10:58.511 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.511 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.511 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.511 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.512 16:12:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.512 BaseBdev3 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.512 [ 00:10:58.512 { 
00:10:58.512 "name": "BaseBdev3", 00:10:58.512 "aliases": [ 00:10:58.512 "84228dbf-dd9b-4d0a-8486-a98fcb732d7f" 00:10:58.512 ], 00:10:58.512 "product_name": "Malloc disk", 00:10:58.512 "block_size": 512, 00:10:58.512 "num_blocks": 65536, 00:10:58.512 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:10:58.512 "assigned_rate_limits": { 00:10:58.512 "rw_ios_per_sec": 0, 00:10:58.512 "rw_mbytes_per_sec": 0, 00:10:58.512 "r_mbytes_per_sec": 0, 00:10:58.512 "w_mbytes_per_sec": 0 00:10:58.512 }, 00:10:58.512 "claimed": false, 00:10:58.512 "zoned": false, 00:10:58.512 "supported_io_types": { 00:10:58.512 "read": true, 00:10:58.512 "write": true, 00:10:58.512 "unmap": true, 00:10:58.512 "flush": true, 00:10:58.512 "reset": true, 00:10:58.512 "nvme_admin": false, 00:10:58.512 "nvme_io": false, 00:10:58.512 "nvme_io_md": false, 00:10:58.512 "write_zeroes": true, 00:10:58.512 "zcopy": true, 00:10:58.512 "get_zone_info": false, 00:10:58.512 "zone_management": false, 00:10:58.512 "zone_append": false, 00:10:58.512 "compare": false, 00:10:58.512 "compare_and_write": false, 00:10:58.512 "abort": true, 00:10:58.512 "seek_hole": false, 00:10:58.512 "seek_data": false, 00:10:58.512 "copy": true, 00:10:58.512 "nvme_iov_md": false 00:10:58.512 }, 00:10:58.512 "memory_domains": [ 00:10:58.512 { 00:10:58.512 "dma_device_id": "system", 00:10:58.512 "dma_device_type": 1 00:10:58.512 }, 00:10:58.512 { 00:10:58.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.512 "dma_device_type": 2 00:10:58.512 } 00:10:58.512 ], 00:10:58.512 "driver_specific": {} 00:10:58.512 } 00:10:58.512 ] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.512 BaseBdev4 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:58.512 [ 00:10:58.512 { 00:10:58.512 "name": "BaseBdev4", 00:10:58.512 "aliases": [ 00:10:58.512 "c6151db0-767d-4623-8b1c-0d4e1731c5a7" 00:10:58.512 ], 00:10:58.512 "product_name": "Malloc disk", 00:10:58.512 "block_size": 512, 00:10:58.512 "num_blocks": 65536, 00:10:58.512 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:10:58.512 "assigned_rate_limits": { 00:10:58.512 "rw_ios_per_sec": 0, 00:10:58.512 "rw_mbytes_per_sec": 0, 00:10:58.512 "r_mbytes_per_sec": 0, 00:10:58.512 "w_mbytes_per_sec": 0 00:10:58.512 }, 00:10:58.512 "claimed": false, 00:10:58.512 "zoned": false, 00:10:58.512 "supported_io_types": { 00:10:58.512 "read": true, 00:10:58.512 "write": true, 00:10:58.512 "unmap": true, 00:10:58.512 "flush": true, 00:10:58.512 "reset": true, 00:10:58.512 "nvme_admin": false, 00:10:58.512 "nvme_io": false, 00:10:58.512 "nvme_io_md": false, 00:10:58.512 "write_zeroes": true, 00:10:58.512 "zcopy": true, 00:10:58.512 "get_zone_info": false, 00:10:58.512 "zone_management": false, 00:10:58.512 "zone_append": false, 00:10:58.512 "compare": false, 00:10:58.512 "compare_and_write": false, 00:10:58.512 "abort": true, 00:10:58.512 "seek_hole": false, 00:10:58.512 "seek_data": false, 00:10:58.512 "copy": true, 00:10:58.512 "nvme_iov_md": false 00:10:58.512 }, 00:10:58.512 "memory_domains": [ 00:10:58.512 { 00:10:58.512 "dma_device_id": "system", 00:10:58.512 "dma_device_type": 1 00:10:58.512 }, 00:10:58.512 { 00:10:58.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.512 "dma_device_type": 2 00:10:58.512 } 00:10:58.512 ], 00:10:58.512 "driver_specific": {} 00:10:58.512 } 00:10:58.512 ] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.512 16:12:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.512 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.512 [2024-09-28 16:12:13.191858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.512 [2024-09-28 16:12:13.191951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.512 [2024-09-28 16:12:13.192016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.512 [2024-09-28 16:12:13.194098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.512 [2024-09-28 16:12:13.194207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.771 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.772 "name": "Existed_Raid", 00:10:58.772 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:10:58.772 "strip_size_kb": 64, 00:10:58.772 "state": "configuring", 00:10:58.772 "raid_level": "raid0", 00:10:58.772 "superblock": true, 00:10:58.772 "num_base_bdevs": 4, 00:10:58.772 "num_base_bdevs_discovered": 3, 00:10:58.772 "num_base_bdevs_operational": 4, 00:10:58.772 "base_bdevs_list": [ 00:10:58.772 { 00:10:58.772 "name": "BaseBdev1", 00:10:58.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.772 "is_configured": false, 00:10:58.772 "data_offset": 0, 00:10:58.772 "data_size": 0 00:10:58.772 }, 00:10:58.772 { 00:10:58.772 "name": "BaseBdev2", 00:10:58.772 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:10:58.772 "is_configured": true, 00:10:58.772 "data_offset": 2048, 00:10:58.772 "data_size": 63488 
00:10:58.772 }, 00:10:58.772 { 00:10:58.772 "name": "BaseBdev3", 00:10:58.772 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:10:58.772 "is_configured": true, 00:10:58.772 "data_offset": 2048, 00:10:58.772 "data_size": 63488 00:10:58.772 }, 00:10:58.772 { 00:10:58.772 "name": "BaseBdev4", 00:10:58.772 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:10:58.772 "is_configured": true, 00:10:58.772 "data_offset": 2048, 00:10:58.772 "data_size": 63488 00:10:58.772 } 00:10:58.772 ] 00:10:58.772 }' 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.772 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.031 [2024-09-28 16:12:13.619114] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.031 "name": "Existed_Raid", 00:10:59.031 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:10:59.031 "strip_size_kb": 64, 00:10:59.031 "state": "configuring", 00:10:59.031 "raid_level": "raid0", 00:10:59.031 "superblock": true, 00:10:59.031 "num_base_bdevs": 4, 00:10:59.031 "num_base_bdevs_discovered": 2, 00:10:59.031 "num_base_bdevs_operational": 4, 00:10:59.031 "base_bdevs_list": [ 00:10:59.031 { 00:10:59.031 "name": "BaseBdev1", 00:10:59.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.031 "is_configured": false, 00:10:59.031 "data_offset": 0, 00:10:59.031 "data_size": 0 00:10:59.031 }, 00:10:59.031 { 00:10:59.031 "name": null, 00:10:59.031 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:10:59.031 "is_configured": false, 00:10:59.031 "data_offset": 0, 00:10:59.031 "data_size": 63488 
00:10:59.031 }, 00:10:59.031 { 00:10:59.031 "name": "BaseBdev3", 00:10:59.031 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:10:59.031 "is_configured": true, 00:10:59.031 "data_offset": 2048, 00:10:59.031 "data_size": 63488 00:10:59.031 }, 00:10:59.031 { 00:10:59.031 "name": "BaseBdev4", 00:10:59.031 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:10:59.031 "is_configured": true, 00:10:59.031 "data_offset": 2048, 00:10:59.031 "data_size": 63488 00:10:59.031 } 00:10:59.031 ] 00:10:59.031 }' 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.031 16:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.600 [2024-09-28 16:12:14.196353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.600 BaseBdev1 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.600 [ 00:10:59.600 { 00:10:59.600 "name": "BaseBdev1", 00:10:59.600 "aliases": [ 00:10:59.600 "0913aef6-9c9f-4ace-8206-446dced1ebc5" 00:10:59.600 ], 00:10:59.600 "product_name": "Malloc disk", 00:10:59.600 "block_size": 512, 00:10:59.600 "num_blocks": 65536, 00:10:59.600 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:10:59.600 "assigned_rate_limits": { 00:10:59.600 "rw_ios_per_sec": 0, 00:10:59.600 "rw_mbytes_per_sec": 0, 
00:10:59.600 "r_mbytes_per_sec": 0, 00:10:59.600 "w_mbytes_per_sec": 0 00:10:59.600 }, 00:10:59.600 "claimed": true, 00:10:59.600 "claim_type": "exclusive_write", 00:10:59.600 "zoned": false, 00:10:59.600 "supported_io_types": { 00:10:59.600 "read": true, 00:10:59.600 "write": true, 00:10:59.600 "unmap": true, 00:10:59.600 "flush": true, 00:10:59.600 "reset": true, 00:10:59.600 "nvme_admin": false, 00:10:59.600 "nvme_io": false, 00:10:59.600 "nvme_io_md": false, 00:10:59.600 "write_zeroes": true, 00:10:59.600 "zcopy": true, 00:10:59.600 "get_zone_info": false, 00:10:59.600 "zone_management": false, 00:10:59.600 "zone_append": false, 00:10:59.600 "compare": false, 00:10:59.600 "compare_and_write": false, 00:10:59.600 "abort": true, 00:10:59.600 "seek_hole": false, 00:10:59.600 "seek_data": false, 00:10:59.600 "copy": true, 00:10:59.600 "nvme_iov_md": false 00:10:59.600 }, 00:10:59.600 "memory_domains": [ 00:10:59.600 { 00:10:59.600 "dma_device_id": "system", 00:10:59.600 "dma_device_type": 1 00:10:59.600 }, 00:10:59.600 { 00:10:59.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.600 "dma_device_type": 2 00:10:59.600 } 00:10:59.600 ], 00:10:59.600 "driver_specific": {} 00:10:59.600 } 00:10:59.600 ] 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.600 16:12:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.600 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.601 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.601 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.601 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.601 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.601 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.601 "name": "Existed_Raid", 00:10:59.601 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:10:59.601 "strip_size_kb": 64, 00:10:59.601 "state": "configuring", 00:10:59.601 "raid_level": "raid0", 00:10:59.601 "superblock": true, 00:10:59.601 "num_base_bdevs": 4, 00:10:59.601 "num_base_bdevs_discovered": 3, 00:10:59.601 "num_base_bdevs_operational": 4, 00:10:59.601 "base_bdevs_list": [ 00:10:59.601 { 00:10:59.601 "name": "BaseBdev1", 00:10:59.601 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:10:59.601 "is_configured": true, 00:10:59.601 "data_offset": 2048, 00:10:59.601 "data_size": 63488 00:10:59.601 }, 00:10:59.601 { 
00:10:59.601 "name": null, 00:10:59.601 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:10:59.601 "is_configured": false, 00:10:59.601 "data_offset": 0, 00:10:59.601 "data_size": 63488 00:10:59.601 }, 00:10:59.601 { 00:10:59.601 "name": "BaseBdev3", 00:10:59.601 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:10:59.601 "is_configured": true, 00:10:59.601 "data_offset": 2048, 00:10:59.601 "data_size": 63488 00:10:59.601 }, 00:10:59.601 { 00:10:59.601 "name": "BaseBdev4", 00:10:59.601 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:10:59.601 "is_configured": true, 00:10:59.601 "data_offset": 2048, 00:10:59.601 "data_size": 63488 00:10:59.601 } 00:10:59.601 ] 00:10:59.601 }' 00:10:59.601 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.601 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.169 [2024-09-28 16:12:14.683534] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.169 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.170 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.170 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.170 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.170 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.170 16:12:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.170 "name": "Existed_Raid", 00:11:00.170 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:11:00.170 "strip_size_kb": 64, 00:11:00.170 "state": "configuring", 00:11:00.170 "raid_level": "raid0", 00:11:00.170 "superblock": true, 00:11:00.170 "num_base_bdevs": 4, 00:11:00.170 "num_base_bdevs_discovered": 2, 00:11:00.170 "num_base_bdevs_operational": 4, 00:11:00.170 "base_bdevs_list": [ 00:11:00.170 { 00:11:00.170 "name": "BaseBdev1", 00:11:00.170 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:11:00.170 "is_configured": true, 00:11:00.170 "data_offset": 2048, 00:11:00.170 "data_size": 63488 00:11:00.170 }, 00:11:00.170 { 00:11:00.170 "name": null, 00:11:00.170 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:11:00.170 "is_configured": false, 00:11:00.170 "data_offset": 0, 00:11:00.170 "data_size": 63488 00:11:00.170 }, 00:11:00.170 { 00:11:00.170 "name": null, 00:11:00.170 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:11:00.170 "is_configured": false, 00:11:00.170 "data_offset": 0, 00:11:00.170 "data_size": 63488 00:11:00.170 }, 00:11:00.170 { 00:11:00.170 "name": "BaseBdev4", 00:11:00.170 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:11:00.170 "is_configured": true, 00:11:00.170 "data_offset": 2048, 00:11:00.170 "data_size": 63488 00:11:00.170 } 00:11:00.170 ] 00:11:00.170 }' 00:11:00.170 16:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.170 16:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.428 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.428 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.428 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.428 
16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.687 [2024-09-28 16:12:15.158759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.687 "name": "Existed_Raid", 00:11:00.687 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:11:00.687 "strip_size_kb": 64, 00:11:00.687 "state": "configuring", 00:11:00.687 "raid_level": "raid0", 00:11:00.687 "superblock": true, 00:11:00.687 "num_base_bdevs": 4, 00:11:00.687 "num_base_bdevs_discovered": 3, 00:11:00.687 "num_base_bdevs_operational": 4, 00:11:00.687 "base_bdevs_list": [ 00:11:00.687 { 00:11:00.687 "name": "BaseBdev1", 00:11:00.687 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:11:00.687 "is_configured": true, 00:11:00.687 "data_offset": 2048, 00:11:00.687 "data_size": 63488 00:11:00.687 }, 00:11:00.687 { 00:11:00.687 "name": null, 00:11:00.687 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:11:00.687 "is_configured": false, 00:11:00.687 "data_offset": 0, 00:11:00.687 "data_size": 63488 00:11:00.687 }, 00:11:00.687 { 00:11:00.687 "name": "BaseBdev3", 00:11:00.687 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:11:00.687 "is_configured": true, 00:11:00.687 "data_offset": 2048, 00:11:00.687 "data_size": 63488 00:11:00.687 }, 00:11:00.687 { 00:11:00.687 "name": "BaseBdev4", 00:11:00.687 "uuid": 
"c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:11:00.687 "is_configured": true, 00:11:00.687 "data_offset": 2048, 00:11:00.687 "data_size": 63488 00:11:00.687 } 00:11:00.687 ] 00:11:00.687 }' 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.687 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.947 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.947 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.947 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.947 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.947 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.947 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.207 [2024-09-28 16:12:15.637941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.207 "name": "Existed_Raid", 00:11:01.207 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:11:01.207 "strip_size_kb": 64, 00:11:01.207 "state": "configuring", 00:11:01.207 "raid_level": "raid0", 00:11:01.207 "superblock": true, 00:11:01.207 "num_base_bdevs": 4, 00:11:01.207 "num_base_bdevs_discovered": 2, 00:11:01.207 "num_base_bdevs_operational": 4, 00:11:01.207 "base_bdevs_list": [ 00:11:01.207 { 00:11:01.207 "name": null, 00:11:01.207 
"uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:11:01.207 "is_configured": false, 00:11:01.207 "data_offset": 0, 00:11:01.207 "data_size": 63488 00:11:01.207 }, 00:11:01.207 { 00:11:01.207 "name": null, 00:11:01.207 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:11:01.207 "is_configured": false, 00:11:01.207 "data_offset": 0, 00:11:01.207 "data_size": 63488 00:11:01.207 }, 00:11:01.207 { 00:11:01.207 "name": "BaseBdev3", 00:11:01.207 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:11:01.207 "is_configured": true, 00:11:01.207 "data_offset": 2048, 00:11:01.207 "data_size": 63488 00:11:01.207 }, 00:11:01.207 { 00:11:01.207 "name": "BaseBdev4", 00:11:01.207 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:11:01.207 "is_configured": true, 00:11:01.207 "data_offset": 2048, 00:11:01.207 "data_size": 63488 00:11:01.207 } 00:11:01.207 ] 00:11:01.207 }' 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.207 16:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.774 [2024-09-28 16:12:16.232444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.774 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.774 "name": "Existed_Raid", 00:11:01.774 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:11:01.774 "strip_size_kb": 64, 00:11:01.774 "state": "configuring", 00:11:01.774 "raid_level": "raid0", 00:11:01.774 "superblock": true, 00:11:01.774 "num_base_bdevs": 4, 00:11:01.774 "num_base_bdevs_discovered": 3, 00:11:01.774 "num_base_bdevs_operational": 4, 00:11:01.774 "base_bdevs_list": [ 00:11:01.774 { 00:11:01.774 "name": null, 00:11:01.774 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:11:01.774 "is_configured": false, 00:11:01.774 "data_offset": 0, 00:11:01.774 "data_size": 63488 00:11:01.774 }, 00:11:01.774 { 00:11:01.774 "name": "BaseBdev2", 00:11:01.774 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:11:01.774 "is_configured": true, 00:11:01.774 "data_offset": 2048, 00:11:01.774 "data_size": 63488 00:11:01.774 }, 00:11:01.774 { 00:11:01.774 "name": "BaseBdev3", 00:11:01.774 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:11:01.774 "is_configured": true, 00:11:01.774 "data_offset": 2048, 00:11:01.774 "data_size": 63488 00:11:01.774 }, 00:11:01.774 { 00:11:01.774 "name": "BaseBdev4", 00:11:01.775 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:11:01.775 "is_configured": true, 00:11:01.775 "data_offset": 2048, 00:11:01.775 "data_size": 63488 00:11:01.775 } 00:11:01.775 ] 00:11:01.775 }' 00:11:01.775 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.775 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.034 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.035 16:12:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.035 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0913aef6-9c9f-4ace-8206-446dced1ebc5 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 NewBaseBdev 00:11:02.300 [2024-09-28 16:12:16.788731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:02.300 [2024-09-28 16:12:16.788998] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:02.300 [2024-09-28 16:12:16.789011] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.300 [2024-09-28 16:12:16.789325] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:02.300 [2024-09-28 16:12:16.789469] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:02.300 [2024-09-28 16:12:16.789499] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:02.300 [2024-09-28 16:12:16.789659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.300 
16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 [ 00:11:02.300 { 00:11:02.300 "name": "NewBaseBdev", 00:11:02.300 "aliases": [ 00:11:02.300 "0913aef6-9c9f-4ace-8206-446dced1ebc5" 00:11:02.300 ], 00:11:02.300 "product_name": "Malloc disk", 00:11:02.300 "block_size": 512, 00:11:02.300 "num_blocks": 65536, 00:11:02.300 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:11:02.300 "assigned_rate_limits": { 00:11:02.300 "rw_ios_per_sec": 0, 00:11:02.300 "rw_mbytes_per_sec": 0, 00:11:02.300 "r_mbytes_per_sec": 0, 00:11:02.300 "w_mbytes_per_sec": 0 00:11:02.300 }, 00:11:02.300 "claimed": true, 00:11:02.300 "claim_type": "exclusive_write", 00:11:02.300 "zoned": false, 00:11:02.300 "supported_io_types": { 00:11:02.300 "read": true, 00:11:02.300 "write": true, 00:11:02.300 "unmap": true, 00:11:02.300 "flush": true, 00:11:02.300 "reset": true, 00:11:02.300 "nvme_admin": false, 00:11:02.300 "nvme_io": false, 00:11:02.300 "nvme_io_md": false, 00:11:02.300 "write_zeroes": true, 00:11:02.300 "zcopy": true, 00:11:02.300 "get_zone_info": false, 00:11:02.300 "zone_management": false, 00:11:02.300 "zone_append": false, 00:11:02.300 "compare": false, 00:11:02.300 "compare_and_write": false, 00:11:02.300 "abort": true, 00:11:02.300 "seek_hole": false, 00:11:02.300 "seek_data": false, 00:11:02.300 "copy": true, 00:11:02.300 "nvme_iov_md": false 00:11:02.300 }, 00:11:02.300 "memory_domains": [ 00:11:02.300 { 00:11:02.300 "dma_device_id": "system", 00:11:02.300 "dma_device_type": 1 00:11:02.300 }, 00:11:02.300 { 00:11:02.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.300 "dma_device_type": 2 00:11:02.300 } 00:11:02.300 ], 00:11:02.300 "driver_specific": {} 00:11:02.300 } 00:11:02.300 ] 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:02.300 16:12:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.300 "name": "Existed_Raid", 00:11:02.300 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:11:02.300 "strip_size_kb": 64, 00:11:02.300 
"state": "online", 00:11:02.300 "raid_level": "raid0", 00:11:02.300 "superblock": true, 00:11:02.300 "num_base_bdevs": 4, 00:11:02.300 "num_base_bdevs_discovered": 4, 00:11:02.300 "num_base_bdevs_operational": 4, 00:11:02.300 "base_bdevs_list": [ 00:11:02.300 { 00:11:02.300 "name": "NewBaseBdev", 00:11:02.300 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:11:02.300 "is_configured": true, 00:11:02.300 "data_offset": 2048, 00:11:02.300 "data_size": 63488 00:11:02.300 }, 00:11:02.300 { 00:11:02.300 "name": "BaseBdev2", 00:11:02.300 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:11:02.300 "is_configured": true, 00:11:02.300 "data_offset": 2048, 00:11:02.300 "data_size": 63488 00:11:02.300 }, 00:11:02.300 { 00:11:02.300 "name": "BaseBdev3", 00:11:02.300 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:11:02.300 "is_configured": true, 00:11:02.300 "data_offset": 2048, 00:11:02.300 "data_size": 63488 00:11:02.300 }, 00:11:02.300 { 00:11:02.300 "name": "BaseBdev4", 00:11:02.300 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:11:02.300 "is_configured": true, 00:11:02.300 "data_offset": 2048, 00:11:02.300 "data_size": 63488 00:11:02.300 } 00:11:02.300 ] 00:11:02.300 }' 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.300 16:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.560 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:02.560 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.560 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.560 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.560 
16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.820 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:02.820 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.820 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.820 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.820 [2024-09-28 16:12:17.252305] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.820 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.820 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.820 "name": "Existed_Raid", 00:11:02.820 "aliases": [ 00:11:02.820 "b5c97eb1-62ae-444f-b56a-1d78637c5dc4" 00:11:02.820 ], 00:11:02.820 "product_name": "Raid Volume", 00:11:02.820 "block_size": 512, 00:11:02.820 "num_blocks": 253952, 00:11:02.820 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:11:02.820 "assigned_rate_limits": { 00:11:02.820 "rw_ios_per_sec": 0, 00:11:02.820 "rw_mbytes_per_sec": 0, 00:11:02.820 "r_mbytes_per_sec": 0, 00:11:02.820 "w_mbytes_per_sec": 0 00:11:02.820 }, 00:11:02.820 "claimed": false, 00:11:02.820 "zoned": false, 00:11:02.820 "supported_io_types": { 00:11:02.820 "read": true, 00:11:02.820 "write": true, 00:11:02.820 "unmap": true, 00:11:02.820 "flush": true, 00:11:02.820 "reset": true, 00:11:02.820 "nvme_admin": false, 00:11:02.820 "nvme_io": false, 00:11:02.820 "nvme_io_md": false, 00:11:02.820 "write_zeroes": true, 00:11:02.820 "zcopy": false, 00:11:02.820 "get_zone_info": false, 00:11:02.820 "zone_management": false, 00:11:02.820 "zone_append": false, 00:11:02.820 "compare": false, 00:11:02.820 "compare_and_write": false, 00:11:02.820 "abort": 
false, 00:11:02.820 "seek_hole": false, 00:11:02.820 "seek_data": false, 00:11:02.820 "copy": false, 00:11:02.820 "nvme_iov_md": false 00:11:02.820 }, 00:11:02.820 "memory_domains": [ 00:11:02.820 { 00:11:02.820 "dma_device_id": "system", 00:11:02.820 "dma_device_type": 1 00:11:02.820 }, 00:11:02.820 { 00:11:02.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.820 "dma_device_type": 2 00:11:02.820 }, 00:11:02.820 { 00:11:02.820 "dma_device_id": "system", 00:11:02.820 "dma_device_type": 1 00:11:02.820 }, 00:11:02.820 { 00:11:02.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.820 "dma_device_type": 2 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 "dma_device_id": "system", 00:11:02.821 "dma_device_type": 1 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.821 "dma_device_type": 2 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 "dma_device_id": "system", 00:11:02.821 "dma_device_type": 1 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.821 "dma_device_type": 2 00:11:02.821 } 00:11:02.821 ], 00:11:02.821 "driver_specific": { 00:11:02.821 "raid": { 00:11:02.821 "uuid": "b5c97eb1-62ae-444f-b56a-1d78637c5dc4", 00:11:02.821 "strip_size_kb": 64, 00:11:02.821 "state": "online", 00:11:02.821 "raid_level": "raid0", 00:11:02.821 "superblock": true, 00:11:02.821 "num_base_bdevs": 4, 00:11:02.821 "num_base_bdevs_discovered": 4, 00:11:02.821 "num_base_bdevs_operational": 4, 00:11:02.821 "base_bdevs_list": [ 00:11:02.821 { 00:11:02.821 "name": "NewBaseBdev", 00:11:02.821 "uuid": "0913aef6-9c9f-4ace-8206-446dced1ebc5", 00:11:02.821 "is_configured": true, 00:11:02.821 "data_offset": 2048, 00:11:02.821 "data_size": 63488 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 "name": "BaseBdev2", 00:11:02.821 "uuid": "2051587e-f169-4fd9-8e87-8aed8748584c", 00:11:02.821 "is_configured": true, 00:11:02.821 "data_offset": 2048, 00:11:02.821 "data_size": 63488 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 
"name": "BaseBdev3", 00:11:02.821 "uuid": "84228dbf-dd9b-4d0a-8486-a98fcb732d7f", 00:11:02.821 "is_configured": true, 00:11:02.821 "data_offset": 2048, 00:11:02.821 "data_size": 63488 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 "name": "BaseBdev4", 00:11:02.821 "uuid": "c6151db0-767d-4623-8b1c-0d4e1731c5a7", 00:11:02.821 "is_configured": true, 00:11:02.821 "data_offset": 2048, 00:11:02.821 "data_size": 63488 00:11:02.821 } 00:11:02.821 ] 00:11:02.821 } 00:11:02.821 } 00:11:02.821 }' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:02.821 BaseBdev2 00:11:02.821 BaseBdev3 00:11:02.821 BaseBdev4' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.821 16:12:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.821 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.081 [2024-09-28 16:12:17.587367] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.081 [2024-09-28 16:12:17.587435] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.081 [2024-09-28 16:12:17.587568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.081 [2024-09-28 16:12:17.587655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.081 [2024-09-28 16:12:17.587710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70086 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70086 ']' 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70086 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70086 00:11:03.081 killing process with pid 70086 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70086' 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70086 00:11:03.081 [2024-09-28 16:12:17.634525] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.081 16:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70086 00:11:03.651 [2024-09-28 16:12:18.047007] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.032 16:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:05.032 00:11:05.032 real 0m11.724s 00:11:05.032 user 0m18.250s 00:11:05.032 sys 0m2.288s 00:11:05.032 ************************************ 00:11:05.032 END TEST raid_state_function_test_sb 00:11:05.032 
************************************ 00:11:05.032 16:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.032 16:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.032 16:12:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:05.032 16:12:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.032 16:12:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.032 16:12:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.032 ************************************ 00:11:05.032 START TEST raid_superblock_test 00:11:05.032 ************************************ 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70752 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70752 00:11:05.032 16:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70752 ']' 00:11:05.033 16:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.033 16:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.033 16:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.033 16:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.033 16:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.033 [2024-09-28 16:12:19.534506] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:05.033 [2024-09-28 16:12:19.534686] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70752 ] 00:11:05.033 [2024-09-28 16:12:19.696588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.292 [2024-09-28 16:12:19.932146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.551 [2024-09-28 16:12:20.169156] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.551 [2024-09-28 16:12:20.169289] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:05.811 
16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.811 malloc1 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.811 [2024-09-28 16:12:20.402956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:05.811 [2024-09-28 16:12:20.403089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.811 [2024-09-28 16:12:20.403132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:05.811 [2024-09-28 16:12:20.403168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.811 [2024-09-28 16:12:20.405575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.811 [2024-09-28 16:12:20.405643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:05.811 pt1 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.811 malloc2 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.811 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.811 [2024-09-28 16:12:20.493386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.811 [2024-09-28 16:12:20.493499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.811 [2024-09-28 16:12:20.493540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:05.811 [2024-09-28 16:12:20.493585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.071 [2024-09-28 16:12:20.495929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.071 [2024-09-28 16:12:20.496003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.071 
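The loop at `bdev_raid.sh@416`–`@426` runs once per base device: iteration *i* names a malloc bdev `malloc<i>`, wraps it in a passthru bdev `pt<i>` via `bdev_passthru_create`, and assigns a UUID whose last field is the zero-padded index. A small Python sketch of just that naming convention (the helper name is ours):

```python
def base_bdev_names(num_base_bdevs):
    """Reproduce the per-iteration naming scheme from the test loop:
    malloc<i>, pt<i>, and a UUID ending in the zero-padded index."""
    bdevs = []
    for i in range(1, num_base_bdevs + 1):
        bdevs.append({
            "malloc": f"malloc{i}",          # backing malloc bdev
            "passthru": f"pt{i}",            # passthru wrapper
            "uuid": f"00000000-0000-0000-0000-{i:012d}",
        })
    return bdevs
```

For the 4-bdev raid0 case in this log, iteration 2 produces exactly the `malloc2`/`pt2`/`...000000000002` triple seen in the `bdev_passthru_create` call above.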
pt2 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 malloc3 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 [2024-09-28 16:12:20.551186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.071 [2024-09-28 16:12:20.551290] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.071 [2024-09-28 16:12:20.551346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:06.071 [2024-09-28 16:12:20.551380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.071 [2024-09-28 16:12:20.553730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.071 [2024-09-28 16:12:20.553813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.071 pt3 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 malloc4 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 [2024-09-28 16:12:20.617138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:06.071 [2024-09-28 16:12:20.617261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.071 [2024-09-28 16:12:20.617301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:06.071 [2024-09-28 16:12:20.617352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.071 [2024-09-28 16:12:20.619722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.071 [2024-09-28 16:12:20.619795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:06.071 pt4 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 [2024-09-28 16:12:20.629157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.071 [2024-09-28 
16:12:20.631210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.071 [2024-09-28 16:12:20.631286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.071 [2024-09-28 16:12:20.631348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:06.071 [2024-09-28 16:12:20.631534] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:06.071 [2024-09-28 16:12:20.631558] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.071 [2024-09-28 16:12:20.631830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:06.071 [2024-09-28 16:12:20.632000] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:06.071 [2024-09-28 16:12:20.632015] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:06.071 [2024-09-28 16:12:20.632153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.072 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.072 "name": "raid_bdev1", 00:11:06.072 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:06.072 "strip_size_kb": 64, 00:11:06.072 "state": "online", 00:11:06.072 "raid_level": "raid0", 00:11:06.072 "superblock": true, 00:11:06.072 "num_base_bdevs": 4, 00:11:06.072 "num_base_bdevs_discovered": 4, 00:11:06.072 "num_base_bdevs_operational": 4, 00:11:06.072 "base_bdevs_list": [ 00:11:06.072 { 00:11:06.072 "name": "pt1", 00:11:06.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.072 "is_configured": true, 00:11:06.072 "data_offset": 2048, 00:11:06.072 "data_size": 63488 00:11:06.072 }, 00:11:06.072 { 00:11:06.072 "name": "pt2", 00:11:06.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.072 "is_configured": true, 00:11:06.072 "data_offset": 2048, 00:11:06.072 "data_size": 63488 00:11:06.072 }, 00:11:06.072 { 00:11:06.072 "name": "pt3", 00:11:06.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.072 "is_configured": true, 00:11:06.072 "data_offset": 2048, 00:11:06.072 
"data_size": 63488 00:11:06.072 }, 00:11:06.072 { 00:11:06.072 "name": "pt4", 00:11:06.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.072 "is_configured": true, 00:11:06.072 "data_offset": 2048, 00:11:06.072 "data_size": 63488 00:11:06.072 } 00:11:06.072 ] 00:11:06.072 }' 00:11:06.072 16:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.072 16:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.639 [2024-09-28 16:12:21.088672] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.639 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.639 "name": "raid_bdev1", 00:11:06.639 "aliases": [ 00:11:06.639 "6195a333-a244-4e3a-bf56-e2da17d57c25" 
00:11:06.639 ], 00:11:06.639 "product_name": "Raid Volume", 00:11:06.639 "block_size": 512, 00:11:06.639 "num_blocks": 253952, 00:11:06.639 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:06.639 "assigned_rate_limits": { 00:11:06.639 "rw_ios_per_sec": 0, 00:11:06.639 "rw_mbytes_per_sec": 0, 00:11:06.639 "r_mbytes_per_sec": 0, 00:11:06.639 "w_mbytes_per_sec": 0 00:11:06.639 }, 00:11:06.639 "claimed": false, 00:11:06.639 "zoned": false, 00:11:06.639 "supported_io_types": { 00:11:06.639 "read": true, 00:11:06.639 "write": true, 00:11:06.639 "unmap": true, 00:11:06.639 "flush": true, 00:11:06.639 "reset": true, 00:11:06.639 "nvme_admin": false, 00:11:06.639 "nvme_io": false, 00:11:06.639 "nvme_io_md": false, 00:11:06.639 "write_zeroes": true, 00:11:06.639 "zcopy": false, 00:11:06.639 "get_zone_info": false, 00:11:06.639 "zone_management": false, 00:11:06.639 "zone_append": false, 00:11:06.639 "compare": false, 00:11:06.639 "compare_and_write": false, 00:11:06.639 "abort": false, 00:11:06.639 "seek_hole": false, 00:11:06.639 "seek_data": false, 00:11:06.639 "copy": false, 00:11:06.639 "nvme_iov_md": false 00:11:06.639 }, 00:11:06.640 "memory_domains": [ 00:11:06.640 { 00:11:06.640 "dma_device_id": "system", 00:11:06.640 "dma_device_type": 1 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.640 "dma_device_type": 2 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": "system", 00:11:06.640 "dma_device_type": 1 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.640 "dma_device_type": 2 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": "system", 00:11:06.640 "dma_device_type": 1 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.640 "dma_device_type": 2 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": "system", 00:11:06.640 "dma_device_type": 1 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:06.640 "dma_device_type": 2 00:11:06.640 } 00:11:06.640 ], 00:11:06.640 "driver_specific": { 00:11:06.640 "raid": { 00:11:06.640 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:06.640 "strip_size_kb": 64, 00:11:06.640 "state": "online", 00:11:06.640 "raid_level": "raid0", 00:11:06.640 "superblock": true, 00:11:06.640 "num_base_bdevs": 4, 00:11:06.640 "num_base_bdevs_discovered": 4, 00:11:06.640 "num_base_bdevs_operational": 4, 00:11:06.640 "base_bdevs_list": [ 00:11:06.640 { 00:11:06.640 "name": "pt1", 00:11:06.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.640 "is_configured": true, 00:11:06.640 "data_offset": 2048, 00:11:06.640 "data_size": 63488 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "name": "pt2", 00:11:06.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.640 "is_configured": true, 00:11:06.640 "data_offset": 2048, 00:11:06.640 "data_size": 63488 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "name": "pt3", 00:11:06.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.640 "is_configured": true, 00:11:06.640 "data_offset": 2048, 00:11:06.640 "data_size": 63488 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "name": "pt4", 00:11:06.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.640 "is_configured": true, 00:11:06.640 "data_offset": 2048, 00:11:06.640 "data_size": 63488 00:11:06.640 } 00:11:06.640 ] 00:11:06.640 } 00:11:06.640 } 00:11:06.640 }' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:06.640 pt2 00:11:06.640 pt3 00:11:06.640 pt4' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.640 16:12:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.640 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
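`verify_raid_bdev_properties` pulls the configured base bdev names out of the `bdev_get_bdevs` output with jq: `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. The same filter expressed in plain Python, run against a trimmed-down copy of the `raid_bdev_info` JSON dumped earlier in this log:

```python
# Trimmed copy of the driver_specific section from the log's
# bdev_get_bdevs output for raid_bdev1.
raid_bdev_info = {
    "name": "raid_bdev1",
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": f"pt{i}", "is_configured": True,
                 "data_offset": 2048, "data_size": 63488}
                for i in range(1, 5)
            ]
        }
    },
}

# Equivalent of: jq -r '.driver_specific.raid.base_bdevs_list[]
#                       | select(.is_configured == true).name'
configured = [b["name"]
              for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
              if b["is_configured"]]
```

With all four base bdevs configured, this yields `pt1 pt2 pt3 pt4`, matching the `base_bdev_names` value the script compares against.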
00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:06.900 [2024-09-28 16:12:21.396041] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6195a333-a244-4e3a-bf56-e2da17d57c25 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6195a333-a244-4e3a-bf56-e2da17d57c25 ']' 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.900 [2024-09-28 16:12:21.471639] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.900 [2024-09-28 16:12:21.471706] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.900 [2024-09-28 16:12:21.471800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.900 [2024-09-28 16:12:21.471888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.900 [2024-09-28 16:12:21.471939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.900 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.901 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.160 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.160 [2024-09-28 16:12:21.639366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:07.160 [2024-09-28 16:12:21.641552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:07.160 [2024-09-28 16:12:21.641653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:07.161 [2024-09-28 16:12:21.641704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:07.161 [2024-09-28 16:12:21.641784] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:07.161 [2024-09-28 16:12:21.641855] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:07.161 [2024-09-28 16:12:21.641905] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:07.161 [2024-09-28 16:12:21.641955] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:07.161 [2024-09-28 16:12:21.642013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.161 [2024-09-28 16:12:21.642048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:07.161 request: 00:11:07.161 { 00:11:07.161 "name": "raid_bdev1", 00:11:07.161 "raid_level": "raid0", 00:11:07.161 "base_bdevs": [ 00:11:07.161 "malloc1", 00:11:07.161 "malloc2", 00:11:07.161 "malloc3", 00:11:07.161 "malloc4" 00:11:07.161 ], 00:11:07.161 "strip_size_kb": 64, 00:11:07.161 "superblock": false, 00:11:07.161 "method": "bdev_raid_create", 00:11:07.161 "req_id": 1 00:11:07.161 } 00:11:07.161 Got JSON-RPC error response 00:11:07.161 response: 00:11:07.161 { 00:11:07.161 "code": -17, 00:11:07.161 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:07.161 } 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.161 [2024-09-28 16:12:21.707257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.161 [2024-09-28 16:12:21.707346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.161 [2024-09-28 16:12:21.707398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:07.161 [2024-09-28 16:12:21.707429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.161 [2024-09-28 16:12:21.709957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.161 [2024-09-28 16:12:21.710045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.161 [2024-09-28 16:12:21.710144] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:07.161 [2024-09-28 16:12:21.710230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.161 pt1 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.161 "name": "raid_bdev1", 00:11:07.161 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:07.161 "strip_size_kb": 64, 00:11:07.161 "state": "configuring", 00:11:07.161 "raid_level": "raid0", 00:11:07.161 "superblock": true, 00:11:07.161 "num_base_bdevs": 4, 00:11:07.161 "num_base_bdevs_discovered": 1, 00:11:07.161 "num_base_bdevs_operational": 4, 00:11:07.161 "base_bdevs_list": [ 00:11:07.161 { 00:11:07.161 "name": "pt1", 00:11:07.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.161 "is_configured": true, 00:11:07.161 "data_offset": 2048, 00:11:07.161 "data_size": 63488 00:11:07.161 }, 00:11:07.161 { 00:11:07.161 "name": null, 00:11:07.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.161 "is_configured": false, 00:11:07.161 "data_offset": 2048, 00:11:07.161 "data_size": 63488 00:11:07.161 }, 00:11:07.161 { 00:11:07.161 "name": null, 00:11:07.161 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:07.161 "is_configured": false, 00:11:07.161 "data_offset": 2048, 00:11:07.161 "data_size": 63488 00:11:07.161 }, 00:11:07.161 { 00:11:07.161 "name": null, 00:11:07.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.161 "is_configured": false, 00:11:07.161 "data_offset": 2048, 00:11:07.161 "data_size": 63488 00:11:07.161 } 00:11:07.161 ] 00:11:07.161 }' 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.161 16:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.420 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:07.420 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.420 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.420 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.420 [2024-09-28 16:12:22.098599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.420 [2024-09-28 16:12:22.098724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.420 [2024-09-28 16:12:22.098766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:07.420 [2024-09-28 16:12:22.098798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.420 [2024-09-28 16:12:22.099345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.420 [2024-09-28 16:12:22.099408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.420 [2024-09-28 16:12:22.099523] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:07.420 [2024-09-28 16:12:22.099577] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.420 pt2 00:11:07.420 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.420 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.680 [2024-09-28 16:12:22.110576] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.680 16:12:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.680 "name": "raid_bdev1", 00:11:07.680 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:07.680 "strip_size_kb": 64, 00:11:07.680 "state": "configuring", 00:11:07.680 "raid_level": "raid0", 00:11:07.680 "superblock": true, 00:11:07.680 "num_base_bdevs": 4, 00:11:07.680 "num_base_bdevs_discovered": 1, 00:11:07.680 "num_base_bdevs_operational": 4, 00:11:07.680 "base_bdevs_list": [ 00:11:07.680 { 00:11:07.680 "name": "pt1", 00:11:07.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.680 "is_configured": true, 00:11:07.680 "data_offset": 2048, 00:11:07.680 "data_size": 63488 00:11:07.680 }, 00:11:07.680 { 00:11:07.680 "name": null, 00:11:07.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.680 "is_configured": false, 00:11:07.680 "data_offset": 0, 00:11:07.680 "data_size": 63488 00:11:07.680 }, 00:11:07.680 { 00:11:07.680 "name": null, 00:11:07.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.680 "is_configured": false, 00:11:07.680 "data_offset": 2048, 00:11:07.680 "data_size": 63488 00:11:07.680 }, 00:11:07.680 { 00:11:07.680 "name": null, 00:11:07.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.680 "is_configured": false, 00:11:07.680 "data_offset": 2048, 00:11:07.680 "data_size": 63488 00:11:07.680 } 00:11:07.680 ] 00:11:07.680 }' 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.680 16:12:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.941 [2024-09-28 16:12:22.525847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.941 [2024-09-28 16:12:22.525939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.941 [2024-09-28 16:12:22.525977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:07.941 [2024-09-28 16:12:22.526004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.941 [2024-09-28 16:12:22.526493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.941 [2024-09-28 16:12:22.526548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.941 [2024-09-28 16:12:22.526667] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:07.941 [2024-09-28 16:12:22.526726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.941 pt2 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.941 [2024-09-28 16:12:22.537827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:07.941 [2024-09-28 16:12:22.537908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.941 [2024-09-28 16:12:22.537949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:07.941 [2024-09-28 16:12:22.537979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.941 [2024-09-28 16:12:22.538388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.941 [2024-09-28 16:12:22.538441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:07.941 [2024-09-28 16:12:22.538532] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:07.941 [2024-09-28 16:12:22.538582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:07.941 pt3 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.941 [2024-09-28 16:12:22.549781] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:07.941 [2024-09-28 16:12:22.549865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.941 [2024-09-28 16:12:22.549917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:07.941 [2024-09-28 16:12:22.549943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.941 [2024-09-28 16:12:22.550334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.941 [2024-09-28 16:12:22.550385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:07.941 [2024-09-28 16:12:22.550468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:07.941 [2024-09-28 16:12:22.550520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:07.941 [2024-09-28 16:12:22.550663] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.941 [2024-09-28 16:12:22.550672] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.941 [2024-09-28 16:12:22.550928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:07.941 [2024-09-28 16:12:22.551078] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.941 [2024-09-28 16:12:22.551092] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:07.941 [2024-09-28 16:12:22.551211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.941 pt4 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.941 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.942 "name": "raid_bdev1", 00:11:07.942 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:07.942 "strip_size_kb": 64, 00:11:07.942 "state": "online", 00:11:07.942 "raid_level": "raid0", 00:11:07.942 
"superblock": true, 00:11:07.942 "num_base_bdevs": 4, 00:11:07.942 "num_base_bdevs_discovered": 4, 00:11:07.942 "num_base_bdevs_operational": 4, 00:11:07.942 "base_bdevs_list": [ 00:11:07.942 { 00:11:07.942 "name": "pt1", 00:11:07.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.942 "is_configured": true, 00:11:07.942 "data_offset": 2048, 00:11:07.942 "data_size": 63488 00:11:07.942 }, 00:11:07.942 { 00:11:07.942 "name": "pt2", 00:11:07.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.942 "is_configured": true, 00:11:07.942 "data_offset": 2048, 00:11:07.942 "data_size": 63488 00:11:07.942 }, 00:11:07.942 { 00:11:07.942 "name": "pt3", 00:11:07.942 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.942 "is_configured": true, 00:11:07.942 "data_offset": 2048, 00:11:07.942 "data_size": 63488 00:11:07.942 }, 00:11:07.942 { 00:11:07.942 "name": "pt4", 00:11:07.942 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.942 "is_configured": true, 00:11:07.942 "data_offset": 2048, 00:11:07.942 "data_size": 63488 00:11:07.942 } 00:11:07.942 ] 00:11:07.942 }' 00:11:07.942 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.942 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.511 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:08.511 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:08.511 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.511 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.511 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.512 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.512 16:12:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.512 16:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.512 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.512 16:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.512 [2024-09-28 16:12:23.001447] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.512 "name": "raid_bdev1", 00:11:08.512 "aliases": [ 00:11:08.512 "6195a333-a244-4e3a-bf56-e2da17d57c25" 00:11:08.512 ], 00:11:08.512 "product_name": "Raid Volume", 00:11:08.512 "block_size": 512, 00:11:08.512 "num_blocks": 253952, 00:11:08.512 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:08.512 "assigned_rate_limits": { 00:11:08.512 "rw_ios_per_sec": 0, 00:11:08.512 "rw_mbytes_per_sec": 0, 00:11:08.512 "r_mbytes_per_sec": 0, 00:11:08.512 "w_mbytes_per_sec": 0 00:11:08.512 }, 00:11:08.512 "claimed": false, 00:11:08.512 "zoned": false, 00:11:08.512 "supported_io_types": { 00:11:08.512 "read": true, 00:11:08.512 "write": true, 00:11:08.512 "unmap": true, 00:11:08.512 "flush": true, 00:11:08.512 "reset": true, 00:11:08.512 "nvme_admin": false, 00:11:08.512 "nvme_io": false, 00:11:08.512 "nvme_io_md": false, 00:11:08.512 "write_zeroes": true, 00:11:08.512 "zcopy": false, 00:11:08.512 "get_zone_info": false, 00:11:08.512 "zone_management": false, 00:11:08.512 "zone_append": false, 00:11:08.512 "compare": false, 00:11:08.512 "compare_and_write": false, 00:11:08.512 "abort": false, 00:11:08.512 "seek_hole": false, 00:11:08.512 "seek_data": false, 00:11:08.512 "copy": false, 00:11:08.512 "nvme_iov_md": false 00:11:08.512 }, 00:11:08.512 
"memory_domains": [ 00:11:08.512 { 00:11:08.512 "dma_device_id": "system", 00:11:08.512 "dma_device_type": 1 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.512 "dma_device_type": 2 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "dma_device_id": "system", 00:11:08.512 "dma_device_type": 1 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.512 "dma_device_type": 2 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "dma_device_id": "system", 00:11:08.512 "dma_device_type": 1 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.512 "dma_device_type": 2 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "dma_device_id": "system", 00:11:08.512 "dma_device_type": 1 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.512 "dma_device_type": 2 00:11:08.512 } 00:11:08.512 ], 00:11:08.512 "driver_specific": { 00:11:08.512 "raid": { 00:11:08.512 "uuid": "6195a333-a244-4e3a-bf56-e2da17d57c25", 00:11:08.512 "strip_size_kb": 64, 00:11:08.512 "state": "online", 00:11:08.512 "raid_level": "raid0", 00:11:08.512 "superblock": true, 00:11:08.512 "num_base_bdevs": 4, 00:11:08.512 "num_base_bdevs_discovered": 4, 00:11:08.512 "num_base_bdevs_operational": 4, 00:11:08.512 "base_bdevs_list": [ 00:11:08.512 { 00:11:08.512 "name": "pt1", 00:11:08.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.512 "is_configured": true, 00:11:08.512 "data_offset": 2048, 00:11:08.512 "data_size": 63488 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "name": "pt2", 00:11:08.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.512 "is_configured": true, 00:11:08.512 "data_offset": 2048, 00:11:08.512 "data_size": 63488 00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "name": "pt3", 00:11:08.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.512 "is_configured": true, 00:11:08.512 "data_offset": 2048, 00:11:08.512 "data_size": 63488 
00:11:08.512 }, 00:11:08.512 { 00:11:08.512 "name": "pt4", 00:11:08.512 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.512 "is_configured": true, 00:11:08.512 "data_offset": 2048, 00:11:08.512 "data_size": 63488 00:11:08.512 } 00:11:08.512 ] 00:11:08.512 } 00:11:08.512 } 00:11:08.512 }' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:08.512 pt2 00:11:08.512 pt3 00:11:08.512 pt4' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.512 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.772 [2024-09-28 16:12:23.312742] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6195a333-a244-4e3a-bf56-e2da17d57c25 '!=' 6195a333-a244-4e3a-bf56-e2da17d57c25 ']' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70752 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70752 ']' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70752 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70752 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70752' 00:11:08.772 killing process with pid 70752 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70752 00:11:08.772 [2024-09-28 16:12:23.396363] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.772 [2024-09-28 16:12:23.396509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.772 16:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70752 00:11:08.772 [2024-09-28 16:12:23.396620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.772 [2024-09-28 16:12:23.396633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:09.341 [2024-09-28 16:12:23.810660] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.721 16:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:10.721 00:11:10.721 real 0m5.699s 00:11:10.721 user 0m7.871s 00:11:10.721 sys 0m1.079s 00:11:10.721 16:12:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.721 16:12:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.721 ************************************ 00:11:10.721 END TEST raid_superblock_test 
00:11:10.721 ************************************ 00:11:10.721 16:12:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:10.721 16:12:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:10.721 16:12:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.721 16:12:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.721 ************************************ 00:11:10.721 START TEST raid_read_error_test 00:11:10.721 ************************************ 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.721 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pJBrD6TB5d 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71022 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71022 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71022 ']' 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.722 16:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.722 [2024-09-28 16:12:25.328461] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:10.722 [2024-09-28 16:12:25.329186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:11:10.981 [2024-09-28 16:12:25.495061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.240 [2024-09-28 16:12:25.741055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.501 [2024-09-28 16:12:25.968902] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.501 [2024-09-28 16:12:25.968939] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.501 BaseBdev1_malloc 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.501 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 true 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 [2024-09-28 16:12:26.195644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:11.763 [2024-09-28 16:12:26.195762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.763 [2024-09-28 16:12:26.195797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:11.763 [2024-09-28 16:12:26.195828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.763 [2024-09-28 16:12:26.198189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.763 [2024-09-28 16:12:26.198286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:11.763 BaseBdev1 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 BaseBdev2_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 true 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 [2024-09-28 16:12:26.278862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:11.763 [2024-09-28 16:12:26.278965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.763 [2024-09-28 16:12:26.279014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:11.763 [2024-09-28 16:12:26.279044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.763 [2024-09-28 16:12:26.281374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.763 [2024-09-28 16:12:26.281456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:11.763 BaseBdev2 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 BaseBdev3_malloc 00:11:11.763 16:12:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 true 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 [2024-09-28 16:12:26.347422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:11.763 [2024-09-28 16:12:26.347476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.763 [2024-09-28 16:12:26.347509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:11.763 [2024-09-28 16:12:26.347521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.763 [2024-09-28 16:12:26.349857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.763 [2024-09-28 16:12:26.349897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:11.763 BaseBdev3 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 BaseBdev4_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 true 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 [2024-09-28 16:12:26.419165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:11.763 [2024-09-28 16:12:26.419272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.763 [2024-09-28 16:12:26.419324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:11.763 [2024-09-28 16:12:26.419355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.763 [2024-09-28 16:12:26.421677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.763 [2024-09-28 16:12:26.421761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:11.763 BaseBdev4 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.763 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 [2024-09-28 16:12:26.431226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.763 [2024-09-28 16:12:26.433317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.763 [2024-09-28 16:12:26.433441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.763 [2024-09-28 16:12:26.433536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:11.763 [2024-09-28 16:12:26.433793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:11.763 [2024-09-28 16:12:26.433844] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:11.764 [2024-09-28 16:12:26.434104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:11.764 [2024-09-28 16:12:26.434319] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:11.764 [2024-09-28 16:12:26.434360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:11.764 [2024-09-28 16:12:26.434553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:11.764 16:12:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.764 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.022 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.022 "name": "raid_bdev1", 00:11:12.022 "uuid": "ac96a890-edac-44c9-8d77-5a281c0ee71c", 00:11:12.022 "strip_size_kb": 64, 00:11:12.022 "state": "online", 00:11:12.022 "raid_level": "raid0", 00:11:12.022 "superblock": true, 00:11:12.022 "num_base_bdevs": 4, 00:11:12.022 "num_base_bdevs_discovered": 4, 00:11:12.022 "num_base_bdevs_operational": 4, 00:11:12.022 "base_bdevs_list": [ 00:11:12.022 
{ 00:11:12.022 "name": "BaseBdev1", 00:11:12.022 "uuid": "9f58d513-757a-5afd-9875-8e0a0a85e100", 00:11:12.022 "is_configured": true, 00:11:12.022 "data_offset": 2048, 00:11:12.022 "data_size": 63488 00:11:12.022 }, 00:11:12.022 { 00:11:12.022 "name": "BaseBdev2", 00:11:12.022 "uuid": "6c698e8d-ee09-5424-8f7d-e43f25a2dbff", 00:11:12.022 "is_configured": true, 00:11:12.022 "data_offset": 2048, 00:11:12.022 "data_size": 63488 00:11:12.022 }, 00:11:12.022 { 00:11:12.022 "name": "BaseBdev3", 00:11:12.022 "uuid": "153d4735-0620-57a7-b161-c9bd3307e445", 00:11:12.022 "is_configured": true, 00:11:12.022 "data_offset": 2048, 00:11:12.022 "data_size": 63488 00:11:12.022 }, 00:11:12.022 { 00:11:12.022 "name": "BaseBdev4", 00:11:12.022 "uuid": "93c46e92-f3c9-531f-900e-bbe2bebfd63c", 00:11:12.022 "is_configured": true, 00:11:12.022 "data_offset": 2048, 00:11:12.022 "data_size": 63488 00:11:12.022 } 00:11:12.022 ] 00:11:12.022 }' 00:11:12.022 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.022 16:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.282 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:12.282 16:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.282 [2024-09-28 16:12:26.939672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.222 16:12:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.222 16:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.482 16:12:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.482 "name": "raid_bdev1", 00:11:13.482 "uuid": "ac96a890-edac-44c9-8d77-5a281c0ee71c", 00:11:13.482 "strip_size_kb": 64, 00:11:13.482 "state": "online", 00:11:13.482 "raid_level": "raid0", 00:11:13.482 "superblock": true, 00:11:13.482 "num_base_bdevs": 4, 00:11:13.482 "num_base_bdevs_discovered": 4, 00:11:13.482 "num_base_bdevs_operational": 4, 00:11:13.482 "base_bdevs_list": [ 00:11:13.482 { 00:11:13.482 "name": "BaseBdev1", 00:11:13.482 "uuid": "9f58d513-757a-5afd-9875-8e0a0a85e100", 00:11:13.482 "is_configured": true, 00:11:13.482 "data_offset": 2048, 00:11:13.482 "data_size": 63488 00:11:13.482 }, 00:11:13.482 { 00:11:13.482 "name": "BaseBdev2", 00:11:13.482 "uuid": "6c698e8d-ee09-5424-8f7d-e43f25a2dbff", 00:11:13.482 "is_configured": true, 00:11:13.482 "data_offset": 2048, 00:11:13.482 "data_size": 63488 00:11:13.482 }, 00:11:13.482 { 00:11:13.482 "name": "BaseBdev3", 00:11:13.482 "uuid": "153d4735-0620-57a7-b161-c9bd3307e445", 00:11:13.482 "is_configured": true, 00:11:13.482 "data_offset": 2048, 00:11:13.482 "data_size": 63488 00:11:13.482 }, 00:11:13.482 { 00:11:13.482 "name": "BaseBdev4", 00:11:13.482 "uuid": "93c46e92-f3c9-531f-900e-bbe2bebfd63c", 00:11:13.482 "is_configured": true, 00:11:13.482 "data_offset": 2048, 00:11:13.482 "data_size": 63488 00:11:13.482 } 00:11:13.482 ] 00:11:13.482 }' 00:11:13.482 16:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.482 16:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.741 [2024-09-28 16:12:28.344642] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.741 [2024-09-28 16:12:28.344731] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.741 [2024-09-28 16:12:28.347364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.741 [2024-09-28 16:12:28.347472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.741 [2024-09-28 16:12:28.347539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.741 [2024-09-28 16:12:28.347593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:13.741 { 00:11:13.741 "results": [ 00:11:13.741 { 00:11:13.741 "job": "raid_bdev1", 00:11:13.741 "core_mask": "0x1", 00:11:13.741 "workload": "randrw", 00:11:13.741 "percentage": 50, 00:11:13.741 "status": "finished", 00:11:13.741 "queue_depth": 1, 00:11:13.741 "io_size": 131072, 00:11:13.741 "runtime": 1.405621, 00:11:13.741 "iops": 14310.40088331065, 00:11:13.741 "mibps": 1788.8001104138314, 00:11:13.741 "io_failed": 1, 00:11:13.741 "io_timeout": 0, 00:11:13.741 "avg_latency_us": 98.71194790737738, 00:11:13.741 "min_latency_us": 24.370305676855896, 00:11:13.741 "max_latency_us": 1452.380786026201 00:11:13.741 } 00:11:13.741 ], 00:11:13.741 "core_count": 1 00:11:13.741 } 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71022 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71022 ']' 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71022 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71022 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71022' 00:11:13.741 killing process with pid 71022 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71022 00:11:13.741 [2024-09-28 16:12:28.393199] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.741 16:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71022 00:11:14.311 [2024-09-28 16:12:28.734753] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.691 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pJBrD6TB5d 00:11:15.691 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:15.691 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:15.691 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:15.691 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:15.691 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.691 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:15.692 16:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:15.692 00:11:15.692 real 0m4.913s 00:11:15.692 user 0m5.578s 00:11:15.692 sys 0m0.733s 00:11:15.692 ************************************ 00:11:15.692 END TEST raid_read_error_test 
00:11:15.692 ************************************ 00:11:15.692 16:12:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.692 16:12:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.692 16:12:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:15.692 16:12:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:15.692 16:12:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.692 16:12:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.692 ************************************ 00:11:15.692 START TEST raid_write_error_test 00:11:15.692 ************************************ 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3JKvYM2Lpk 00:11:15.692 16:12:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71164 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71164 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71164 ']' 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.692 16:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.692 [2024-09-28 16:12:30.322212] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:15.692 [2024-09-28 16:12:30.322860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:11:15.952 [2024-09-28 16:12:30.490412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.212 [2024-09-28 16:12:30.735679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.471 [2024-09-28 16:12:30.965534] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.471 [2024-09-28 16:12:30.965680] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.471 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.471 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:16.471 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.471 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.471 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.471 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.731 BaseBdev1_malloc 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.731 true 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.731 [2024-09-28 16:12:31.205986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:16.731 [2024-09-28 16:12:31.206090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.731 [2024-09-28 16:12:31.206142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:16.731 [2024-09-28 16:12:31.206173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.731 [2024-09-28 16:12:31.208593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.731 [2024-09-28 16:12:31.208682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:16.731 BaseBdev1 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.731 BaseBdev2_malloc 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:16.731 16:12:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.731 true 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.731 [2024-09-28 16:12:31.290585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:16.731 [2024-09-28 16:12:31.290681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.731 [2024-09-28 16:12:31.290714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:16.731 [2024-09-28 16:12:31.290744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.731 [2024-09-28 16:12:31.293075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.731 [2024-09-28 16:12:31.293149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:16.731 BaseBdev2 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.731 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:16.732 BaseBdev3_malloc 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.732 true 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.732 [2024-09-28 16:12:31.360218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:16.732 [2024-09-28 16:12:31.360320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.732 [2024-09-28 16:12:31.360370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:16.732 [2024-09-28 16:12:31.360400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.732 [2024-09-28 16:12:31.362735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.732 [2024-09-28 16:12:31.362811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:16.732 BaseBdev3 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.732 BaseBdev4_malloc 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:16.732 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.991 true 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.991 [2024-09-28 16:12:31.432590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:16.991 [2024-09-28 16:12:31.432681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.991 [2024-09-28 16:12:31.432730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:16.991 [2024-09-28 16:12:31.432763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.991 [2024-09-28 16:12:31.435074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.991 [2024-09-28 16:12:31.435166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:16.991 BaseBdev4 
00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.991 [2024-09-28 16:12:31.444653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.991 [2024-09-28 16:12:31.446742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.991 [2024-09-28 16:12:31.446812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.991 [2024-09-28 16:12:31.446868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.991 [2024-09-28 16:12:31.447089] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:16.991 [2024-09-28 16:12:31.447104] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.991 [2024-09-28 16:12:31.447355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.991 [2024-09-28 16:12:31.447530] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:16.991 [2024-09-28 16:12:31.447546] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:16.991 [2024-09-28 16:12:31.447694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.991 "name": "raid_bdev1", 00:11:16.991 "uuid": "0a51af47-ac48-419f-aa64-dd837cc3e179", 00:11:16.991 "strip_size_kb": 64, 00:11:16.991 "state": "online", 00:11:16.991 "raid_level": "raid0", 00:11:16.991 "superblock": true, 00:11:16.991 "num_base_bdevs": 4, 00:11:16.991 "num_base_bdevs_discovered": 4, 00:11:16.991 
"num_base_bdevs_operational": 4, 00:11:16.991 "base_bdevs_list": [ 00:11:16.991 { 00:11:16.991 "name": "BaseBdev1", 00:11:16.991 "uuid": "c0cd6542-0e82-5863-a831-0bc36e4a04b1", 00:11:16.991 "is_configured": true, 00:11:16.991 "data_offset": 2048, 00:11:16.991 "data_size": 63488 00:11:16.991 }, 00:11:16.991 { 00:11:16.991 "name": "BaseBdev2", 00:11:16.991 "uuid": "193bd133-aae3-5582-8678-ff6897949041", 00:11:16.991 "is_configured": true, 00:11:16.991 "data_offset": 2048, 00:11:16.991 "data_size": 63488 00:11:16.991 }, 00:11:16.991 { 00:11:16.991 "name": "BaseBdev3", 00:11:16.991 "uuid": "e83774d4-9ac9-58a2-b3a6-12bbcdd90a42", 00:11:16.991 "is_configured": true, 00:11:16.991 "data_offset": 2048, 00:11:16.991 "data_size": 63488 00:11:16.991 }, 00:11:16.991 { 00:11:16.991 "name": "BaseBdev4", 00:11:16.991 "uuid": "d09e0e89-9723-5f7c-84e2-5803e8e8a0e8", 00:11:16.991 "is_configured": true, 00:11:16.991 "data_offset": 2048, 00:11:16.991 "data_size": 63488 00:11:16.991 } 00:11:16.991 ] 00:11:16.991 }' 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.991 16:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.251 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:17.251 16:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:17.510 [2024-09-28 16:12:31.973269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.449 "name": "raid_bdev1", 00:11:18.449 "uuid": "0a51af47-ac48-419f-aa64-dd837cc3e179", 00:11:18.449 "strip_size_kb": 64, 00:11:18.449 "state": "online", 00:11:18.449 "raid_level": "raid0", 00:11:18.449 "superblock": true, 00:11:18.449 "num_base_bdevs": 4, 00:11:18.449 "num_base_bdevs_discovered": 4, 00:11:18.449 "num_base_bdevs_operational": 4, 00:11:18.449 "base_bdevs_list": [ 00:11:18.449 { 00:11:18.449 "name": "BaseBdev1", 00:11:18.449 "uuid": "c0cd6542-0e82-5863-a831-0bc36e4a04b1", 00:11:18.449 "is_configured": true, 00:11:18.449 "data_offset": 2048, 00:11:18.449 "data_size": 63488 00:11:18.449 }, 00:11:18.449 { 00:11:18.449 "name": "BaseBdev2", 00:11:18.449 "uuid": "193bd133-aae3-5582-8678-ff6897949041", 00:11:18.449 "is_configured": true, 00:11:18.449 "data_offset": 2048, 00:11:18.449 "data_size": 63488 00:11:18.449 }, 00:11:18.449 { 00:11:18.449 "name": "BaseBdev3", 00:11:18.449 "uuid": "e83774d4-9ac9-58a2-b3a6-12bbcdd90a42", 00:11:18.449 "is_configured": true, 00:11:18.449 "data_offset": 2048, 00:11:18.449 "data_size": 63488 00:11:18.449 }, 00:11:18.449 { 00:11:18.449 "name": "BaseBdev4", 00:11:18.449 "uuid": "d09e0e89-9723-5f7c-84e2-5803e8e8a0e8", 00:11:18.449 "is_configured": true, 00:11:18.449 "data_offset": 2048, 00:11:18.449 "data_size": 63488 00:11:18.449 } 00:11:18.449 ] 00:11:18.449 }' 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.449 16:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.707 16:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:18.708 [2024-09-28 16:12:33.354428] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.708 [2024-09-28 16:12:33.354512] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.708 [2024-09-28 16:12:33.357144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.708 [2024-09-28 16:12:33.357278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.708 [2024-09-28 16:12:33.357350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.708 [2024-09-28 16:12:33.357404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:18.708 { 00:11:18.708 "results": [ 00:11:18.708 { 00:11:18.708 "job": "raid_bdev1", 00:11:18.708 "core_mask": "0x1", 00:11:18.708 "workload": "randrw", 00:11:18.708 "percentage": 50, 00:11:18.708 "status": "finished", 00:11:18.708 "queue_depth": 1, 00:11:18.708 "io_size": 131072, 00:11:18.708 "runtime": 1.381688, 00:11:18.708 "iops": 14195.679487699104, 00:11:18.708 "mibps": 1774.459935962388, 00:11:18.708 "io_failed": 1, 00:11:18.708 "io_timeout": 0, 00:11:18.708 "avg_latency_us": 99.38714204773773, 00:11:18.708 "min_latency_us": 24.929257641921396, 00:11:18.708 "max_latency_us": 1395.1441048034935 00:11:18.708 } 00:11:18.708 ], 00:11:18.708 "core_count": 1 00:11:18.708 } 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71164 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71164 ']' 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71164 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.708 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71164 00:11:18.967 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.967 killing process with pid 71164 00:11:18.967 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.967 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71164' 00:11:18.967 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71164 00:11:18.967 [2024-09-28 16:12:33.404733] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.967 16:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71164 00:11:19.226 [2024-09-28 16:12:33.743872] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3JKvYM2Lpk 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:20.638 ************************************ 00:11:20.638 END TEST raid_write_error_test 00:11:20.638 ************************************ 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.72 != \0\.\0\0 ]] 00:11:20.638 00:11:20.638 real 0m4.933s 00:11:20.638 user 0m5.613s 00:11:20.638 sys 0m0.731s 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.638 16:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.638 16:12:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:20.638 16:12:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:20.638 16:12:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:20.638 16:12:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.638 16:12:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.638 ************************************ 00:11:20.638 START TEST raid_state_function_test 00:11:20.638 ************************************ 00:11:20.638 16:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:11:20.638 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:20.638 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:20.638 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:20.638 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71314 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:20.639 Process raid pid: 71314 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71314' 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71314 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71314 ']' 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.639 16:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.899 [2024-09-28 16:12:35.325859] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:20.899 [2024-09-28 16:12:35.326055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.899 [2024-09-28 16:12:35.493498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.158 [2024-09-28 16:12:35.735618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.417 [2024-09-28 16:12:35.972463] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.417 [2024-09-28 16:12:35.972601] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.676 [2024-09-28 16:12:36.150831] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.676 [2024-09-28 16:12:36.150899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.676 [2024-09-28 16:12:36.150909] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.676 [2024-09-28 16:12:36.150920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.676 [2024-09-28 16:12:36.150925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:21.676 [2024-09-28 16:12:36.150937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.676 [2024-09-28 16:12:36.150943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.676 [2024-09-28 16:12:36.150953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.676 "name": "Existed_Raid", 00:11:21.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.676 "strip_size_kb": 64, 00:11:21.676 "state": "configuring", 00:11:21.676 "raid_level": "concat", 00:11:21.676 "superblock": false, 00:11:21.676 "num_base_bdevs": 4, 00:11:21.676 "num_base_bdevs_discovered": 0, 00:11:21.676 "num_base_bdevs_operational": 4, 00:11:21.676 "base_bdevs_list": [ 00:11:21.676 { 00:11:21.676 "name": "BaseBdev1", 00:11:21.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.676 "is_configured": false, 00:11:21.676 "data_offset": 0, 00:11:21.676 "data_size": 0 00:11:21.676 }, 00:11:21.676 { 00:11:21.676 "name": "BaseBdev2", 00:11:21.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.676 "is_configured": false, 00:11:21.676 "data_offset": 0, 00:11:21.676 "data_size": 0 00:11:21.676 }, 00:11:21.676 { 00:11:21.676 "name": "BaseBdev3", 00:11:21.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.676 "is_configured": false, 00:11:21.676 "data_offset": 0, 00:11:21.676 "data_size": 0 00:11:21.676 }, 00:11:21.676 { 00:11:21.676 "name": "BaseBdev4", 00:11:21.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.676 "is_configured": false, 00:11:21.676 "data_offset": 0, 00:11:21.676 "data_size": 0 00:11:21.676 } 00:11:21.676 ] 00:11:21.676 }' 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.676 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.936 [2024-09-28 16:12:36.542074] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.936 [2024-09-28 16:12:36.542183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.936 [2024-09-28 16:12:36.550088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.936 [2024-09-28 16:12:36.550185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.936 [2024-09-28 16:12:36.550213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.936 [2024-09-28 16:12:36.550244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.936 [2024-09-28 16:12:36.550264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.936 [2024-09-28 16:12:36.550285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.936 [2024-09-28 16:12:36.550303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.936 [2024-09-28 16:12:36.550356] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.936 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.195 [2024-09-28 16:12:36.635102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.195 BaseBdev1 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.195 [ 00:11:22.195 { 00:11:22.195 "name": "BaseBdev1", 00:11:22.195 "aliases": [ 00:11:22.195 "11a02dbf-2743-4a12-bde3-4eee0b10d711" 00:11:22.195 ], 00:11:22.195 "product_name": "Malloc disk", 00:11:22.195 "block_size": 512, 00:11:22.195 "num_blocks": 65536, 00:11:22.195 "uuid": "11a02dbf-2743-4a12-bde3-4eee0b10d711", 00:11:22.195 "assigned_rate_limits": { 00:11:22.195 "rw_ios_per_sec": 0, 00:11:22.195 "rw_mbytes_per_sec": 0, 00:11:22.195 "r_mbytes_per_sec": 0, 00:11:22.195 "w_mbytes_per_sec": 0 00:11:22.195 }, 00:11:22.195 "claimed": true, 00:11:22.195 "claim_type": "exclusive_write", 00:11:22.195 "zoned": false, 00:11:22.195 "supported_io_types": { 00:11:22.195 "read": true, 00:11:22.195 "write": true, 00:11:22.195 "unmap": true, 00:11:22.195 "flush": true, 00:11:22.195 "reset": true, 00:11:22.195 "nvme_admin": false, 00:11:22.195 "nvme_io": false, 00:11:22.195 "nvme_io_md": false, 00:11:22.195 "write_zeroes": true, 00:11:22.195 "zcopy": true, 00:11:22.195 "get_zone_info": false, 00:11:22.195 "zone_management": false, 00:11:22.195 "zone_append": false, 00:11:22.195 "compare": false, 00:11:22.195 "compare_and_write": false, 00:11:22.195 "abort": true, 00:11:22.195 "seek_hole": false, 00:11:22.195 "seek_data": false, 00:11:22.195 "copy": true, 00:11:22.195 "nvme_iov_md": false 00:11:22.195 }, 00:11:22.195 "memory_domains": [ 00:11:22.195 { 00:11:22.195 "dma_device_id": "system", 00:11:22.195 "dma_device_type": 1 00:11:22.195 }, 00:11:22.195 { 00:11:22.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.195 "dma_device_type": 2 00:11:22.195 } 00:11:22.195 ], 00:11:22.195 "driver_specific": {} 00:11:22.195 } 00:11:22.195 ] 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.195 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.195 "name": "Existed_Raid", 
00:11:22.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.195 "strip_size_kb": 64, 00:11:22.195 "state": "configuring", 00:11:22.195 "raid_level": "concat", 00:11:22.195 "superblock": false, 00:11:22.196 "num_base_bdevs": 4, 00:11:22.196 "num_base_bdevs_discovered": 1, 00:11:22.196 "num_base_bdevs_operational": 4, 00:11:22.196 "base_bdevs_list": [ 00:11:22.196 { 00:11:22.196 "name": "BaseBdev1", 00:11:22.196 "uuid": "11a02dbf-2743-4a12-bde3-4eee0b10d711", 00:11:22.196 "is_configured": true, 00:11:22.196 "data_offset": 0, 00:11:22.196 "data_size": 65536 00:11:22.196 }, 00:11:22.196 { 00:11:22.196 "name": "BaseBdev2", 00:11:22.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.196 "is_configured": false, 00:11:22.196 "data_offset": 0, 00:11:22.196 "data_size": 0 00:11:22.196 }, 00:11:22.196 { 00:11:22.196 "name": "BaseBdev3", 00:11:22.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.196 "is_configured": false, 00:11:22.196 "data_offset": 0, 00:11:22.196 "data_size": 0 00:11:22.196 }, 00:11:22.196 { 00:11:22.196 "name": "BaseBdev4", 00:11:22.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.196 "is_configured": false, 00:11:22.196 "data_offset": 0, 00:11:22.196 "data_size": 0 00:11:22.196 } 00:11:22.196 ] 00:11:22.196 }' 00:11:22.196 16:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.196 16:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.455 [2024-09-28 16:12:37.102332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.455 [2024-09-28 16:12:37.102383] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.455 [2024-09-28 16:12:37.114370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.455 [2024-09-28 16:12:37.116500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.455 [2024-09-28 16:12:37.116543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.455 [2024-09-28 16:12:37.116553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.455 [2024-09-28 16:12:37.116564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.455 [2024-09-28 16:12:37.116571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.455 [2024-09-28 16:12:37.116579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.455 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.715 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.715 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.715 "name": "Existed_Raid", 00:11:22.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.715 "strip_size_kb": 64, 00:11:22.715 "state": "configuring", 00:11:22.715 "raid_level": "concat", 00:11:22.715 "superblock": false, 00:11:22.715 "num_base_bdevs": 4, 00:11:22.715 
"num_base_bdevs_discovered": 1, 00:11:22.715 "num_base_bdevs_operational": 4, 00:11:22.715 "base_bdevs_list": [ 00:11:22.715 { 00:11:22.715 "name": "BaseBdev1", 00:11:22.715 "uuid": "11a02dbf-2743-4a12-bde3-4eee0b10d711", 00:11:22.715 "is_configured": true, 00:11:22.715 "data_offset": 0, 00:11:22.715 "data_size": 65536 00:11:22.715 }, 00:11:22.715 { 00:11:22.715 "name": "BaseBdev2", 00:11:22.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.715 "is_configured": false, 00:11:22.715 "data_offset": 0, 00:11:22.715 "data_size": 0 00:11:22.715 }, 00:11:22.715 { 00:11:22.715 "name": "BaseBdev3", 00:11:22.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.715 "is_configured": false, 00:11:22.715 "data_offset": 0, 00:11:22.715 "data_size": 0 00:11:22.715 }, 00:11:22.715 { 00:11:22.715 "name": "BaseBdev4", 00:11:22.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.715 "is_configured": false, 00:11:22.715 "data_offset": 0, 00:11:22.715 "data_size": 0 00:11:22.715 } 00:11:22.715 ] 00:11:22.715 }' 00:11:22.715 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.715 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 [2024-09-28 16:12:37.609964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.974 BaseBdev2 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.974 16:12:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 [ 00:11:22.974 { 00:11:22.974 "name": "BaseBdev2", 00:11:22.974 "aliases": [ 00:11:22.974 "b7ad2b8e-61e4-4b3f-82b9-80bbff1e5e61" 00:11:22.974 ], 00:11:22.974 "product_name": "Malloc disk", 00:11:22.974 "block_size": 512, 00:11:22.974 "num_blocks": 65536, 00:11:22.974 "uuid": "b7ad2b8e-61e4-4b3f-82b9-80bbff1e5e61", 00:11:22.974 "assigned_rate_limits": { 00:11:22.974 "rw_ios_per_sec": 0, 00:11:22.974 "rw_mbytes_per_sec": 0, 00:11:22.974 "r_mbytes_per_sec": 0, 00:11:22.974 "w_mbytes_per_sec": 0 00:11:22.974 }, 00:11:22.974 "claimed": true, 00:11:22.974 "claim_type": "exclusive_write", 00:11:22.974 "zoned": false, 00:11:22.974 "supported_io_types": { 
00:11:22.974 "read": true, 00:11:22.974 "write": true, 00:11:22.974 "unmap": true, 00:11:22.974 "flush": true, 00:11:22.974 "reset": true, 00:11:22.974 "nvme_admin": false, 00:11:22.974 "nvme_io": false, 00:11:22.974 "nvme_io_md": false, 00:11:22.974 "write_zeroes": true, 00:11:22.974 "zcopy": true, 00:11:22.974 "get_zone_info": false, 00:11:22.974 "zone_management": false, 00:11:22.974 "zone_append": false, 00:11:22.974 "compare": false, 00:11:22.974 "compare_and_write": false, 00:11:22.974 "abort": true, 00:11:22.974 "seek_hole": false, 00:11:22.974 "seek_data": false, 00:11:22.974 "copy": true, 00:11:22.974 "nvme_iov_md": false 00:11:22.974 }, 00:11:22.974 "memory_domains": [ 00:11:22.974 { 00:11:22.974 "dma_device_id": "system", 00:11:22.974 "dma_device_type": 1 00:11:22.974 }, 00:11:22.974 { 00:11:22.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.974 "dma_device_type": 2 00:11:22.974 } 00:11:22.974 ], 00:11:22.974 "driver_specific": {} 00:11:22.974 } 00:11:22.974 ] 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.974 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.235 "name": "Existed_Raid", 00:11:23.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.235 "strip_size_kb": 64, 00:11:23.235 "state": "configuring", 00:11:23.235 "raid_level": "concat", 00:11:23.235 "superblock": false, 00:11:23.235 "num_base_bdevs": 4, 00:11:23.235 "num_base_bdevs_discovered": 2, 00:11:23.235 "num_base_bdevs_operational": 4, 00:11:23.235 "base_bdevs_list": [ 00:11:23.235 { 00:11:23.235 "name": "BaseBdev1", 00:11:23.235 "uuid": "11a02dbf-2743-4a12-bde3-4eee0b10d711", 00:11:23.235 "is_configured": true, 00:11:23.235 "data_offset": 0, 00:11:23.235 "data_size": 65536 00:11:23.235 }, 00:11:23.235 { 00:11:23.235 "name": "BaseBdev2", 00:11:23.235 "uuid": "b7ad2b8e-61e4-4b3f-82b9-80bbff1e5e61", 00:11:23.235 
"is_configured": true, 00:11:23.235 "data_offset": 0, 00:11:23.235 "data_size": 65536 00:11:23.235 }, 00:11:23.235 { 00:11:23.235 "name": "BaseBdev3", 00:11:23.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.235 "is_configured": false, 00:11:23.235 "data_offset": 0, 00:11:23.235 "data_size": 0 00:11:23.235 }, 00:11:23.235 { 00:11:23.235 "name": "BaseBdev4", 00:11:23.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.235 "is_configured": false, 00:11:23.235 "data_offset": 0, 00:11:23.235 "data_size": 0 00:11:23.235 } 00:11:23.235 ] 00:11:23.235 }' 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.235 16:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.495 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.495 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.495 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.495 [2024-09-28 16:12:38.144929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.496 BaseBdev3 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.496 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.496 [ 00:11:23.496 { 00:11:23.496 "name": "BaseBdev3", 00:11:23.496 "aliases": [ 00:11:23.496 "85ba30fe-6602-4ef3-9730-0f97abe39a12" 00:11:23.496 ], 00:11:23.496 "product_name": "Malloc disk", 00:11:23.496 "block_size": 512, 00:11:23.496 "num_blocks": 65536, 00:11:23.496 "uuid": "85ba30fe-6602-4ef3-9730-0f97abe39a12", 00:11:23.496 "assigned_rate_limits": { 00:11:23.496 "rw_ios_per_sec": 0, 00:11:23.496 "rw_mbytes_per_sec": 0, 00:11:23.496 "r_mbytes_per_sec": 0, 00:11:23.496 "w_mbytes_per_sec": 0 00:11:23.496 }, 00:11:23.496 "claimed": true, 00:11:23.496 "claim_type": "exclusive_write", 00:11:23.496 "zoned": false, 00:11:23.496 "supported_io_types": { 00:11:23.496 "read": true, 00:11:23.496 "write": true, 00:11:23.496 "unmap": true, 00:11:23.496 "flush": true, 00:11:23.496 "reset": true, 00:11:23.496 "nvme_admin": false, 00:11:23.496 "nvme_io": false, 00:11:23.756 "nvme_io_md": false, 00:11:23.756 "write_zeroes": true, 00:11:23.756 "zcopy": true, 00:11:23.756 "get_zone_info": false, 00:11:23.756 "zone_management": false, 00:11:23.756 "zone_append": false, 00:11:23.756 "compare": false, 00:11:23.756 "compare_and_write": false, 
00:11:23.756 "abort": true, 00:11:23.756 "seek_hole": false, 00:11:23.756 "seek_data": false, 00:11:23.756 "copy": true, 00:11:23.756 "nvme_iov_md": false 00:11:23.756 }, 00:11:23.756 "memory_domains": [ 00:11:23.756 { 00:11:23.756 "dma_device_id": "system", 00:11:23.756 "dma_device_type": 1 00:11:23.756 }, 00:11:23.756 { 00:11:23.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.756 "dma_device_type": 2 00:11:23.756 } 00:11:23.756 ], 00:11:23.756 "driver_specific": {} 00:11:23.756 } 00:11:23.756 ] 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.756 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.756 "name": "Existed_Raid", 00:11:23.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.756 "strip_size_kb": 64, 00:11:23.756 "state": "configuring", 00:11:23.756 "raid_level": "concat", 00:11:23.756 "superblock": false, 00:11:23.756 "num_base_bdevs": 4, 00:11:23.756 "num_base_bdevs_discovered": 3, 00:11:23.756 "num_base_bdevs_operational": 4, 00:11:23.756 "base_bdevs_list": [ 00:11:23.756 { 00:11:23.756 "name": "BaseBdev1", 00:11:23.756 "uuid": "11a02dbf-2743-4a12-bde3-4eee0b10d711", 00:11:23.756 "is_configured": true, 00:11:23.756 "data_offset": 0, 00:11:23.756 "data_size": 65536 00:11:23.756 }, 00:11:23.756 { 00:11:23.756 "name": "BaseBdev2", 00:11:23.756 "uuid": "b7ad2b8e-61e4-4b3f-82b9-80bbff1e5e61", 00:11:23.756 "is_configured": true, 00:11:23.756 "data_offset": 0, 00:11:23.756 "data_size": 65536 00:11:23.756 }, 00:11:23.756 { 00:11:23.756 "name": "BaseBdev3", 00:11:23.756 "uuid": "85ba30fe-6602-4ef3-9730-0f97abe39a12", 00:11:23.756 "is_configured": true, 00:11:23.756 "data_offset": 0, 00:11:23.756 "data_size": 65536 00:11:23.756 }, 00:11:23.756 { 00:11:23.756 "name": "BaseBdev4", 00:11:23.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.757 "is_configured": false, 
00:11:23.757 "data_offset": 0, 00:11:23.757 "data_size": 0 00:11:23.757 } 00:11:23.757 ] 00:11:23.757 }' 00:11:23.757 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.757 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.015 [2024-09-28 16:12:38.690827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.015 [2024-09-28 16:12:38.690977] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.015 [2024-09-28 16:12:38.690991] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:24.015 [2024-09-28 16:12:38.691367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.015 [2024-09-28 16:12:38.691565] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.015 [2024-09-28 16:12:38.691578] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:24.015 [2024-09-28 16:12:38.691877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.015 BaseBdev4 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.015 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.274 [ 00:11:24.274 { 00:11:24.274 "name": "BaseBdev4", 00:11:24.274 "aliases": [ 00:11:24.274 "f1512572-9faf-458c-8042-c349541e4bae" 00:11:24.274 ], 00:11:24.274 "product_name": "Malloc disk", 00:11:24.274 "block_size": 512, 00:11:24.274 "num_blocks": 65536, 00:11:24.274 "uuid": "f1512572-9faf-458c-8042-c349541e4bae", 00:11:24.274 "assigned_rate_limits": { 00:11:24.274 "rw_ios_per_sec": 0, 00:11:24.274 "rw_mbytes_per_sec": 0, 00:11:24.274 "r_mbytes_per_sec": 0, 00:11:24.274 "w_mbytes_per_sec": 0 00:11:24.274 }, 00:11:24.274 "claimed": true, 00:11:24.274 "claim_type": "exclusive_write", 00:11:24.274 "zoned": false, 00:11:24.274 "supported_io_types": { 00:11:24.274 "read": true, 00:11:24.274 "write": true, 00:11:24.274 "unmap": true, 00:11:24.274 "flush": true, 00:11:24.274 "reset": true, 00:11:24.274 
"nvme_admin": false, 00:11:24.274 "nvme_io": false, 00:11:24.274 "nvme_io_md": false, 00:11:24.274 "write_zeroes": true, 00:11:24.274 "zcopy": true, 00:11:24.274 "get_zone_info": false, 00:11:24.274 "zone_management": false, 00:11:24.274 "zone_append": false, 00:11:24.274 "compare": false, 00:11:24.274 "compare_and_write": false, 00:11:24.274 "abort": true, 00:11:24.274 "seek_hole": false, 00:11:24.274 "seek_data": false, 00:11:24.274 "copy": true, 00:11:24.274 "nvme_iov_md": false 00:11:24.274 }, 00:11:24.274 "memory_domains": [ 00:11:24.274 { 00:11:24.274 "dma_device_id": "system", 00:11:24.274 "dma_device_type": 1 00:11:24.274 }, 00:11:24.274 { 00:11:24.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.274 "dma_device_type": 2 00:11:24.274 } 00:11:24.274 ], 00:11:24.274 "driver_specific": {} 00:11:24.274 } 00:11:24.274 ] 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.274 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.275 
16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.275 "name": "Existed_Raid", 00:11:24.275 "uuid": "6e12ae64-888a-4c2f-8914-064aed85faec", 00:11:24.275 "strip_size_kb": 64, 00:11:24.275 "state": "online", 00:11:24.275 "raid_level": "concat", 00:11:24.275 "superblock": false, 00:11:24.275 "num_base_bdevs": 4, 00:11:24.275 "num_base_bdevs_discovered": 4, 00:11:24.275 "num_base_bdevs_operational": 4, 00:11:24.275 "base_bdevs_list": [ 00:11:24.275 { 00:11:24.275 "name": "BaseBdev1", 00:11:24.275 "uuid": "11a02dbf-2743-4a12-bde3-4eee0b10d711", 00:11:24.275 "is_configured": true, 00:11:24.275 "data_offset": 0, 00:11:24.275 "data_size": 65536 00:11:24.275 }, 00:11:24.275 { 00:11:24.275 "name": "BaseBdev2", 00:11:24.275 "uuid": "b7ad2b8e-61e4-4b3f-82b9-80bbff1e5e61", 00:11:24.275 "is_configured": true, 00:11:24.275 "data_offset": 0, 00:11:24.275 "data_size": 65536 00:11:24.275 }, 00:11:24.275 { 00:11:24.275 "name": "BaseBdev3", 
00:11:24.275 "uuid": "85ba30fe-6602-4ef3-9730-0f97abe39a12", 00:11:24.275 "is_configured": true, 00:11:24.275 "data_offset": 0, 00:11:24.275 "data_size": 65536 00:11:24.275 }, 00:11:24.275 { 00:11:24.275 "name": "BaseBdev4", 00:11:24.275 "uuid": "f1512572-9faf-458c-8042-c349541e4bae", 00:11:24.275 "is_configured": true, 00:11:24.275 "data_offset": 0, 00:11:24.275 "data_size": 65536 00:11:24.275 } 00:11:24.275 ] 00:11:24.275 }' 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.275 16:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 [2024-09-28 16:12:39.186418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.534 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.534 
16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.534 "name": "Existed_Raid", 00:11:24.534 "aliases": [ 00:11:24.534 "6e12ae64-888a-4c2f-8914-064aed85faec" 00:11:24.534 ], 00:11:24.534 "product_name": "Raid Volume", 00:11:24.534 "block_size": 512, 00:11:24.534 "num_blocks": 262144, 00:11:24.534 "uuid": "6e12ae64-888a-4c2f-8914-064aed85faec", 00:11:24.534 "assigned_rate_limits": { 00:11:24.534 "rw_ios_per_sec": 0, 00:11:24.534 "rw_mbytes_per_sec": 0, 00:11:24.534 "r_mbytes_per_sec": 0, 00:11:24.534 "w_mbytes_per_sec": 0 00:11:24.534 }, 00:11:24.534 "claimed": false, 00:11:24.534 "zoned": false, 00:11:24.534 "supported_io_types": { 00:11:24.534 "read": true, 00:11:24.534 "write": true, 00:11:24.534 "unmap": true, 00:11:24.534 "flush": true, 00:11:24.534 "reset": true, 00:11:24.534 "nvme_admin": false, 00:11:24.534 "nvme_io": false, 00:11:24.534 "nvme_io_md": false, 00:11:24.534 "write_zeroes": true, 00:11:24.534 "zcopy": false, 00:11:24.534 "get_zone_info": false, 00:11:24.534 "zone_management": false, 00:11:24.534 "zone_append": false, 00:11:24.534 "compare": false, 00:11:24.534 "compare_and_write": false, 00:11:24.534 "abort": false, 00:11:24.534 "seek_hole": false, 00:11:24.534 "seek_data": false, 00:11:24.534 "copy": false, 00:11:24.534 "nvme_iov_md": false 00:11:24.534 }, 00:11:24.534 "memory_domains": [ 00:11:24.534 { 00:11:24.534 "dma_device_id": "system", 00:11:24.534 "dma_device_type": 1 00:11:24.534 }, 00:11:24.534 { 00:11:24.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.534 "dma_device_type": 2 00:11:24.534 }, 00:11:24.534 { 00:11:24.534 "dma_device_id": "system", 00:11:24.534 "dma_device_type": 1 00:11:24.534 }, 00:11:24.535 { 00:11:24.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.535 "dma_device_type": 2 00:11:24.535 }, 00:11:24.535 { 00:11:24.535 "dma_device_id": "system", 00:11:24.535 "dma_device_type": 1 00:11:24.535 }, 00:11:24.535 { 00:11:24.535 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:24.535 "dma_device_type": 2 00:11:24.535 }, 00:11:24.535 { 00:11:24.535 "dma_device_id": "system", 00:11:24.535 "dma_device_type": 1 00:11:24.535 }, 00:11:24.535 { 00:11:24.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.535 "dma_device_type": 2 00:11:24.535 } 00:11:24.535 ], 00:11:24.535 "driver_specific": { 00:11:24.535 "raid": { 00:11:24.535 "uuid": "6e12ae64-888a-4c2f-8914-064aed85faec", 00:11:24.535 "strip_size_kb": 64, 00:11:24.535 "state": "online", 00:11:24.535 "raid_level": "concat", 00:11:24.535 "superblock": false, 00:11:24.535 "num_base_bdevs": 4, 00:11:24.535 "num_base_bdevs_discovered": 4, 00:11:24.535 "num_base_bdevs_operational": 4, 00:11:24.535 "base_bdevs_list": [ 00:11:24.535 { 00:11:24.535 "name": "BaseBdev1", 00:11:24.535 "uuid": "11a02dbf-2743-4a12-bde3-4eee0b10d711", 00:11:24.535 "is_configured": true, 00:11:24.535 "data_offset": 0, 00:11:24.535 "data_size": 65536 00:11:24.535 }, 00:11:24.535 { 00:11:24.535 "name": "BaseBdev2", 00:11:24.535 "uuid": "b7ad2b8e-61e4-4b3f-82b9-80bbff1e5e61", 00:11:24.535 "is_configured": true, 00:11:24.535 "data_offset": 0, 00:11:24.535 "data_size": 65536 00:11:24.535 }, 00:11:24.535 { 00:11:24.535 "name": "BaseBdev3", 00:11:24.535 "uuid": "85ba30fe-6602-4ef3-9730-0f97abe39a12", 00:11:24.535 "is_configured": true, 00:11:24.535 "data_offset": 0, 00:11:24.535 "data_size": 65536 00:11:24.535 }, 00:11:24.535 { 00:11:24.535 "name": "BaseBdev4", 00:11:24.535 "uuid": "f1512572-9faf-458c-8042-c349541e4bae", 00:11:24.535 "is_configured": true, 00:11:24.535 "data_offset": 0, 00:11:24.535 "data_size": 65536 00:11:24.535 } 00:11:24.535 ] 00:11:24.535 } 00:11:24.535 } 00:11:24.535 }' 00:11:24.535 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:24.795 BaseBdev2 
00:11:24.795 BaseBdev3 00:11:24.795 BaseBdev4' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.795 16:12:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.795 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.055 16:12:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 [2024-09-28 16:12:39.493539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.055 [2024-09-28 16:12:39.493575] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.055 [2024-09-28 16:12:39.493637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.055 "name": "Existed_Raid", 00:11:25.055 "uuid": "6e12ae64-888a-4c2f-8914-064aed85faec", 00:11:25.055 "strip_size_kb": 64, 00:11:25.055 "state": "offline", 00:11:25.055 "raid_level": "concat", 00:11:25.055 "superblock": false, 00:11:25.055 "num_base_bdevs": 4, 00:11:25.055 "num_base_bdevs_discovered": 3, 00:11:25.055 "num_base_bdevs_operational": 3, 00:11:25.055 "base_bdevs_list": [ 00:11:25.055 { 00:11:25.055 "name": null, 00:11:25.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.055 "is_configured": false, 00:11:25.055 "data_offset": 0, 00:11:25.055 "data_size": 65536 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev2", 00:11:25.055 "uuid": "b7ad2b8e-61e4-4b3f-82b9-80bbff1e5e61", 00:11:25.055 "is_configured": 
true, 00:11:25.055 "data_offset": 0, 00:11:25.055 "data_size": 65536 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev3", 00:11:25.055 "uuid": "85ba30fe-6602-4ef3-9730-0f97abe39a12", 00:11:25.055 "is_configured": true, 00:11:25.055 "data_offset": 0, 00:11:25.055 "data_size": 65536 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev4", 00:11:25.055 "uuid": "f1512572-9faf-458c-8042-c349541e4bae", 00:11:25.055 "is_configured": true, 00:11:25.055 "data_offset": 0, 00:11:25.055 "data_size": 65536 00:11:25.055 } 00:11:25.055 ] 00:11:25.055 }' 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.055 16:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.624 [2024-09-28 16:12:40.094845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.624 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.625 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.625 [2024-09-28 16:12:40.256353] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.883 16:12:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.883 [2024-09-28 16:12:40.416538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:25.883 [2024-09-28 16:12:40.416653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.883 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.884 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.143 BaseBdev2 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.143 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 [ 00:11:26.144 { 00:11:26.144 "name": "BaseBdev2", 00:11:26.144 "aliases": [ 00:11:26.144 "3a63d1fe-1441-4e32-b369-273c619667aa" 00:11:26.144 ], 00:11:26.144 "product_name": "Malloc disk", 00:11:26.144 "block_size": 512, 00:11:26.144 "num_blocks": 65536, 00:11:26.144 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:26.144 "assigned_rate_limits": { 00:11:26.144 "rw_ios_per_sec": 0, 00:11:26.144 "rw_mbytes_per_sec": 0, 00:11:26.144 "r_mbytes_per_sec": 0, 00:11:26.144 "w_mbytes_per_sec": 0 00:11:26.144 }, 00:11:26.144 "claimed": false, 00:11:26.144 "zoned": false, 00:11:26.144 "supported_io_types": { 00:11:26.144 "read": true, 00:11:26.144 "write": true, 00:11:26.144 "unmap": true, 00:11:26.144 "flush": true, 00:11:26.144 "reset": true, 00:11:26.144 "nvme_admin": false, 00:11:26.144 "nvme_io": false, 00:11:26.144 "nvme_io_md": false, 00:11:26.144 "write_zeroes": true, 00:11:26.144 "zcopy": true, 00:11:26.144 "get_zone_info": false, 00:11:26.144 "zone_management": false, 00:11:26.144 "zone_append": false, 00:11:26.144 "compare": false, 00:11:26.144 "compare_and_write": false, 00:11:26.144 "abort": true, 00:11:26.144 "seek_hole": false, 00:11:26.144 "seek_data": false, 
00:11:26.144 "copy": true, 00:11:26.144 "nvme_iov_md": false 00:11:26.144 }, 00:11:26.144 "memory_domains": [ 00:11:26.144 { 00:11:26.144 "dma_device_id": "system", 00:11:26.144 "dma_device_type": 1 00:11:26.144 }, 00:11:26.144 { 00:11:26.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.144 "dma_device_type": 2 00:11:26.144 } 00:11:26.144 ], 00:11:26.144 "driver_specific": {} 00:11:26.144 } 00:11:26.144 ] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 BaseBdev3 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.144 
16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 [ 00:11:26.144 { 00:11:26.144 "name": "BaseBdev3", 00:11:26.144 "aliases": [ 00:11:26.144 "45ace435-cb03-4071-8ad2-737b06d500e2" 00:11:26.144 ], 00:11:26.144 "product_name": "Malloc disk", 00:11:26.144 "block_size": 512, 00:11:26.144 "num_blocks": 65536, 00:11:26.144 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:26.144 "assigned_rate_limits": { 00:11:26.144 "rw_ios_per_sec": 0, 00:11:26.144 "rw_mbytes_per_sec": 0, 00:11:26.144 "r_mbytes_per_sec": 0, 00:11:26.144 "w_mbytes_per_sec": 0 00:11:26.144 }, 00:11:26.144 "claimed": false, 00:11:26.144 "zoned": false, 00:11:26.144 "supported_io_types": { 00:11:26.144 "read": true, 00:11:26.144 "write": true, 00:11:26.144 "unmap": true, 00:11:26.144 "flush": true, 00:11:26.144 "reset": true, 00:11:26.144 "nvme_admin": false, 00:11:26.144 "nvme_io": false, 00:11:26.144 "nvme_io_md": false, 00:11:26.144 "write_zeroes": true, 00:11:26.144 "zcopy": true, 00:11:26.144 "get_zone_info": false, 00:11:26.144 "zone_management": false, 00:11:26.144 "zone_append": false, 00:11:26.144 "compare": false, 00:11:26.144 "compare_and_write": false, 00:11:26.144 "abort": true, 00:11:26.144 "seek_hole": false, 00:11:26.144 "seek_data": false, 00:11:26.144 
"copy": true, 00:11:26.144 "nvme_iov_md": false 00:11:26.144 }, 00:11:26.144 "memory_domains": [ 00:11:26.144 { 00:11:26.144 "dma_device_id": "system", 00:11:26.144 "dma_device_type": 1 00:11:26.144 }, 00:11:26.144 { 00:11:26.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.144 "dma_device_type": 2 00:11:26.144 } 00:11:26.144 ], 00:11:26.144 "driver_specific": {} 00:11:26.144 } 00:11:26.144 ] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 BaseBdev4 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.144 16:12:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.144 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 [ 00:11:26.144 { 00:11:26.144 "name": "BaseBdev4", 00:11:26.144 "aliases": [ 00:11:26.144 "16d068fa-9e0e-4d70-a60b-cb199abd380d" 00:11:26.144 ], 00:11:26.144 "product_name": "Malloc disk", 00:11:26.144 "block_size": 512, 00:11:26.144 "num_blocks": 65536, 00:11:26.144 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:26.144 "assigned_rate_limits": { 00:11:26.144 "rw_ios_per_sec": 0, 00:11:26.144 "rw_mbytes_per_sec": 0, 00:11:26.144 "r_mbytes_per_sec": 0, 00:11:26.144 "w_mbytes_per_sec": 0 00:11:26.144 }, 00:11:26.144 "claimed": false, 00:11:26.144 "zoned": false, 00:11:26.144 "supported_io_types": { 00:11:26.144 "read": true, 00:11:26.144 "write": true, 00:11:26.144 "unmap": true, 00:11:26.144 "flush": true, 00:11:26.144 "reset": true, 00:11:26.144 "nvme_admin": false, 00:11:26.144 "nvme_io": false, 00:11:26.144 "nvme_io_md": false, 00:11:26.144 "write_zeroes": true, 00:11:26.144 "zcopy": true, 00:11:26.144 "get_zone_info": false, 00:11:26.404 "zone_management": false, 00:11:26.404 "zone_append": false, 00:11:26.404 "compare": false, 00:11:26.404 "compare_and_write": false, 00:11:26.404 "abort": true, 00:11:26.404 "seek_hole": false, 00:11:26.404 "seek_data": false, 00:11:26.404 "copy": true, 
00:11:26.404 "nvme_iov_md": false 00:11:26.404 }, 00:11:26.404 "memory_domains": [ 00:11:26.404 { 00:11:26.404 "dma_device_id": "system", 00:11:26.404 "dma_device_type": 1 00:11:26.404 }, 00:11:26.404 { 00:11:26.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.404 "dma_device_type": 2 00:11:26.404 } 00:11:26.404 ], 00:11:26.404 "driver_specific": {} 00:11:26.404 } 00:11:26.404 ] 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.404 [2024-09-28 16:12:40.843516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.404 [2024-09-28 16:12:40.843568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.404 [2024-09-28 16:12:40.843607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.404 [2024-09-28 16:12:40.845711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.404 [2024-09-28 16:12:40.845763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.404 16:12:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.404 "name": "Existed_Raid", 00:11:26.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.404 "strip_size_kb": 64, 00:11:26.404 "state": "configuring", 00:11:26.404 
"raid_level": "concat", 00:11:26.404 "superblock": false, 00:11:26.404 "num_base_bdevs": 4, 00:11:26.404 "num_base_bdevs_discovered": 3, 00:11:26.404 "num_base_bdevs_operational": 4, 00:11:26.404 "base_bdevs_list": [ 00:11:26.404 { 00:11:26.404 "name": "BaseBdev1", 00:11:26.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.404 "is_configured": false, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 0 00:11:26.404 }, 00:11:26.404 { 00:11:26.404 "name": "BaseBdev2", 00:11:26.404 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:26.404 "is_configured": true, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 65536 00:11:26.404 }, 00:11:26.404 { 00:11:26.404 "name": "BaseBdev3", 00:11:26.404 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:26.404 "is_configured": true, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 65536 00:11:26.404 }, 00:11:26.404 { 00:11:26.404 "name": "BaseBdev4", 00:11:26.404 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:26.404 "is_configured": true, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 65536 00:11:26.404 } 00:11:26.404 ] 00:11:26.404 }' 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.404 16:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.664 [2024-09-28 16:12:41.258842] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.664 "name": "Existed_Raid", 00:11:26.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.664 "strip_size_kb": 64, 00:11:26.664 "state": "configuring", 00:11:26.664 "raid_level": "concat", 00:11:26.664 "superblock": false, 
00:11:26.664 "num_base_bdevs": 4, 00:11:26.664 "num_base_bdevs_discovered": 2, 00:11:26.664 "num_base_bdevs_operational": 4, 00:11:26.664 "base_bdevs_list": [ 00:11:26.664 { 00:11:26.664 "name": "BaseBdev1", 00:11:26.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.664 "is_configured": false, 00:11:26.664 "data_offset": 0, 00:11:26.664 "data_size": 0 00:11:26.664 }, 00:11:26.664 { 00:11:26.664 "name": null, 00:11:26.664 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:26.664 "is_configured": false, 00:11:26.664 "data_offset": 0, 00:11:26.664 "data_size": 65536 00:11:26.664 }, 00:11:26.664 { 00:11:26.664 "name": "BaseBdev3", 00:11:26.664 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:26.664 "is_configured": true, 00:11:26.664 "data_offset": 0, 00:11:26.664 "data_size": 65536 00:11:26.664 }, 00:11:26.664 { 00:11:26.664 "name": "BaseBdev4", 00:11:26.664 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:26.664 "is_configured": true, 00:11:26.664 "data_offset": 0, 00:11:26.664 "data_size": 65536 00:11:26.664 } 00:11:26.664 ] 00:11:26.664 }' 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.664 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.233 16:12:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.233 [2024-09-28 16:12:41.803343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.233 BaseBdev1 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.233 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.234 [ 00:11:27.234 { 00:11:27.234 "name": "BaseBdev1", 00:11:27.234 "aliases": [ 00:11:27.234 "72124bb3-3c4c-4a17-8e88-c35d11fb5b88" 00:11:27.234 ], 00:11:27.234 "product_name": "Malloc disk", 00:11:27.234 "block_size": 512, 00:11:27.234 "num_blocks": 65536, 00:11:27.234 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:27.234 "assigned_rate_limits": { 00:11:27.234 "rw_ios_per_sec": 0, 00:11:27.234 "rw_mbytes_per_sec": 0, 00:11:27.234 "r_mbytes_per_sec": 0, 00:11:27.234 "w_mbytes_per_sec": 0 00:11:27.234 }, 00:11:27.234 "claimed": true, 00:11:27.234 "claim_type": "exclusive_write", 00:11:27.234 "zoned": false, 00:11:27.234 "supported_io_types": { 00:11:27.234 "read": true, 00:11:27.234 "write": true, 00:11:27.234 "unmap": true, 00:11:27.234 "flush": true, 00:11:27.234 "reset": true, 00:11:27.234 "nvme_admin": false, 00:11:27.234 "nvme_io": false, 00:11:27.234 "nvme_io_md": false, 00:11:27.234 "write_zeroes": true, 00:11:27.234 "zcopy": true, 00:11:27.234 "get_zone_info": false, 00:11:27.234 "zone_management": false, 00:11:27.234 "zone_append": false, 00:11:27.234 "compare": false, 00:11:27.234 "compare_and_write": false, 00:11:27.234 "abort": true, 00:11:27.234 "seek_hole": false, 00:11:27.234 "seek_data": false, 00:11:27.234 "copy": true, 00:11:27.234 "nvme_iov_md": false 00:11:27.234 }, 00:11:27.234 "memory_domains": [ 00:11:27.234 { 00:11:27.234 "dma_device_id": "system", 00:11:27.234 "dma_device_type": 1 00:11:27.234 }, 00:11:27.234 { 00:11:27.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.234 "dma_device_type": 2 00:11:27.234 } 00:11:27.234 ], 00:11:27.234 "driver_specific": {} 00:11:27.234 } 00:11:27.234 ] 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.234 "name": "Existed_Raid", 00:11:27.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.234 "strip_size_kb": 64, 00:11:27.234 "state": "configuring", 00:11:27.234 "raid_level": "concat", 00:11:27.234 "superblock": false, 
00:11:27.234 "num_base_bdevs": 4, 00:11:27.234 "num_base_bdevs_discovered": 3, 00:11:27.234 "num_base_bdevs_operational": 4, 00:11:27.234 "base_bdevs_list": [ 00:11:27.234 { 00:11:27.234 "name": "BaseBdev1", 00:11:27.234 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:27.234 "is_configured": true, 00:11:27.234 "data_offset": 0, 00:11:27.234 "data_size": 65536 00:11:27.234 }, 00:11:27.234 { 00:11:27.234 "name": null, 00:11:27.234 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:27.234 "is_configured": false, 00:11:27.234 "data_offset": 0, 00:11:27.234 "data_size": 65536 00:11:27.234 }, 00:11:27.234 { 00:11:27.234 "name": "BaseBdev3", 00:11:27.234 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:27.234 "is_configured": true, 00:11:27.234 "data_offset": 0, 00:11:27.234 "data_size": 65536 00:11:27.234 }, 00:11:27.234 { 00:11:27.234 "name": "BaseBdev4", 00:11:27.234 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:27.234 "is_configured": true, 00:11:27.234 "data_offset": 0, 00:11:27.234 "data_size": 65536 00:11:27.234 } 00:11:27.234 ] 00:11:27.234 }' 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.234 16:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.803 16:12:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.803 [2024-09-28 16:12:42.354451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.803 16:12:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.803 "name": "Existed_Raid", 00:11:27.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.803 "strip_size_kb": 64, 00:11:27.803 "state": "configuring", 00:11:27.803 "raid_level": "concat", 00:11:27.803 "superblock": false, 00:11:27.803 "num_base_bdevs": 4, 00:11:27.803 "num_base_bdevs_discovered": 2, 00:11:27.803 "num_base_bdevs_operational": 4, 00:11:27.803 "base_bdevs_list": [ 00:11:27.803 { 00:11:27.803 "name": "BaseBdev1", 00:11:27.803 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:27.803 "is_configured": true, 00:11:27.803 "data_offset": 0, 00:11:27.803 "data_size": 65536 00:11:27.803 }, 00:11:27.803 { 00:11:27.803 "name": null, 00:11:27.803 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:27.803 "is_configured": false, 00:11:27.803 "data_offset": 0, 00:11:27.803 "data_size": 65536 00:11:27.803 }, 00:11:27.803 { 00:11:27.803 "name": null, 00:11:27.803 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:27.803 "is_configured": false, 00:11:27.803 "data_offset": 0, 00:11:27.803 "data_size": 65536 00:11:27.803 }, 00:11:27.803 { 00:11:27.803 "name": "BaseBdev4", 00:11:27.803 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:27.803 "is_configured": true, 00:11:27.803 "data_offset": 0, 00:11:27.803 "data_size": 65536 00:11:27.803 } 00:11:27.803 ] 00:11:27.803 }' 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.803 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.371 [2024-09-28 16:12:42.893565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.371 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.371 "name": "Existed_Raid", 00:11:28.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.371 "strip_size_kb": 64, 00:11:28.371 "state": "configuring", 00:11:28.371 "raid_level": "concat", 00:11:28.371 "superblock": false, 00:11:28.371 "num_base_bdevs": 4, 00:11:28.371 "num_base_bdevs_discovered": 3, 00:11:28.371 "num_base_bdevs_operational": 4, 00:11:28.371 "base_bdevs_list": [ 00:11:28.371 { 00:11:28.371 "name": "BaseBdev1", 00:11:28.371 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:28.371 "is_configured": true, 00:11:28.371 "data_offset": 0, 00:11:28.371 "data_size": 65536 00:11:28.371 }, 00:11:28.371 { 00:11:28.371 "name": null, 00:11:28.371 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:28.371 "is_configured": false, 00:11:28.371 "data_offset": 0, 00:11:28.371 "data_size": 65536 00:11:28.371 }, 00:11:28.371 { 00:11:28.371 "name": "BaseBdev3", 00:11:28.371 "uuid": 
"45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:28.371 "is_configured": true, 00:11:28.371 "data_offset": 0, 00:11:28.371 "data_size": 65536 00:11:28.371 }, 00:11:28.371 { 00:11:28.371 "name": "BaseBdev4", 00:11:28.371 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:28.371 "is_configured": true, 00:11:28.372 "data_offset": 0, 00:11:28.372 "data_size": 65536 00:11:28.372 } 00:11:28.372 ] 00:11:28.372 }' 00:11:28.372 16:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.372 16:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.940 [2024-09-28 16:12:43.392707] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.940 "name": "Existed_Raid", 00:11:28.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.940 "strip_size_kb": 64, 00:11:28.940 "state": "configuring", 00:11:28.940 "raid_level": "concat", 00:11:28.940 "superblock": false, 00:11:28.940 "num_base_bdevs": 4, 00:11:28.940 
"num_base_bdevs_discovered": 2, 00:11:28.940 "num_base_bdevs_operational": 4, 00:11:28.940 "base_bdevs_list": [ 00:11:28.940 { 00:11:28.940 "name": null, 00:11:28.940 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:28.940 "is_configured": false, 00:11:28.940 "data_offset": 0, 00:11:28.940 "data_size": 65536 00:11:28.940 }, 00:11:28.940 { 00:11:28.940 "name": null, 00:11:28.940 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:28.940 "is_configured": false, 00:11:28.940 "data_offset": 0, 00:11:28.940 "data_size": 65536 00:11:28.940 }, 00:11:28.940 { 00:11:28.940 "name": "BaseBdev3", 00:11:28.940 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:28.940 "is_configured": true, 00:11:28.940 "data_offset": 0, 00:11:28.940 "data_size": 65536 00:11:28.940 }, 00:11:28.940 { 00:11:28.940 "name": "BaseBdev4", 00:11:28.940 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:28.940 "is_configured": true, 00:11:28.940 "data_offset": 0, 00:11:28.940 "data_size": 65536 00:11:28.940 } 00:11:28.940 ] 00:11:28.940 }' 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.940 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.509 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.509 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.509 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.509 16:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.509 16:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.509 [2024-09-28 16:12:44.029481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.509 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.509 "name": "Existed_Raid", 00:11:29.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.509 "strip_size_kb": 64, 00:11:29.509 "state": "configuring", 00:11:29.509 "raid_level": "concat", 00:11:29.509 "superblock": false, 00:11:29.509 "num_base_bdevs": 4, 00:11:29.509 "num_base_bdevs_discovered": 3, 00:11:29.509 "num_base_bdevs_operational": 4, 00:11:29.509 "base_bdevs_list": [ 00:11:29.509 { 00:11:29.509 "name": null, 00:11:29.509 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:29.509 "is_configured": false, 00:11:29.509 "data_offset": 0, 00:11:29.509 "data_size": 65536 00:11:29.509 }, 00:11:29.509 { 00:11:29.509 "name": "BaseBdev2", 00:11:29.509 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:29.509 "is_configured": true, 00:11:29.509 "data_offset": 0, 00:11:29.509 "data_size": 65536 00:11:29.509 }, 00:11:29.509 { 00:11:29.509 "name": "BaseBdev3", 00:11:29.509 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:29.509 "is_configured": true, 00:11:29.509 "data_offset": 0, 00:11:29.509 "data_size": 65536 00:11:29.509 }, 00:11:29.509 { 00:11:29.509 "name": "BaseBdev4", 00:11:29.509 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:29.509 "is_configured": true, 00:11:29.509 "data_offset": 0, 00:11:29.510 "data_size": 65536 00:11:29.510 } 00:11:29.510 ] 00:11:29.510 }' 00:11:29.510 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.510 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.768 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:29.768 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.768 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.768 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.768 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 72124bb3-3c4c-4a17-8e88-c35d11fb5b88 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.026 [2024-09-28 16:12:44.555295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.026 [2024-09-28 16:12:44.555352] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.026 [2024-09-28 16:12:44.555360] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:30.026 [2024-09-28 16:12:44.555665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:30.026 [2024-09-28 16:12:44.555839] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.026 [2024-09-28 16:12:44.555852] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.026 [2024-09-28 16:12:44.556121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.026 NewBaseBdev 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.026 [ 00:11:30.026 { 00:11:30.026 "name": "NewBaseBdev", 00:11:30.026 "aliases": [ 00:11:30.026 "72124bb3-3c4c-4a17-8e88-c35d11fb5b88" 00:11:30.026 ], 00:11:30.026 "product_name": "Malloc disk", 00:11:30.026 "block_size": 512, 00:11:30.026 "num_blocks": 65536, 00:11:30.026 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:30.026 "assigned_rate_limits": { 00:11:30.026 "rw_ios_per_sec": 0, 00:11:30.026 "rw_mbytes_per_sec": 0, 00:11:30.026 "r_mbytes_per_sec": 0, 00:11:30.026 "w_mbytes_per_sec": 0 00:11:30.026 }, 00:11:30.026 "claimed": true, 00:11:30.026 "claim_type": "exclusive_write", 00:11:30.026 "zoned": false, 00:11:30.026 "supported_io_types": { 00:11:30.026 "read": true, 00:11:30.026 "write": true, 00:11:30.026 "unmap": true, 00:11:30.026 "flush": true, 00:11:30.026 "reset": true, 00:11:30.026 "nvme_admin": false, 00:11:30.026 "nvme_io": false, 00:11:30.026 "nvme_io_md": false, 00:11:30.026 "write_zeroes": true, 00:11:30.026 "zcopy": true, 00:11:30.026 "get_zone_info": false, 00:11:30.026 "zone_management": false, 00:11:30.026 "zone_append": false, 00:11:30.026 "compare": false, 00:11:30.026 "compare_and_write": false, 00:11:30.026 "abort": true, 00:11:30.026 "seek_hole": false, 00:11:30.026 "seek_data": false, 00:11:30.026 "copy": true, 00:11:30.026 "nvme_iov_md": false 00:11:30.026 }, 00:11:30.026 "memory_domains": [ 00:11:30.026 { 00:11:30.026 "dma_device_id": "system", 00:11:30.026 "dma_device_type": 1 00:11:30.026 }, 00:11:30.026 { 00:11:30.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.026 "dma_device_type": 2 00:11:30.026 } 00:11:30.026 ], 00:11:30.026 "driver_specific": {} 00:11:30.026 } 00:11:30.026 ] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.026 "name": "Existed_Raid", 00:11:30.026 "uuid": "29643b5c-98b3-45b9-bde7-f9ffe764dab9", 00:11:30.026 "strip_size_kb": 64, 00:11:30.026 "state": "online", 00:11:30.026 "raid_level": "concat", 00:11:30.026 "superblock": false, 00:11:30.026 
"num_base_bdevs": 4, 00:11:30.026 "num_base_bdevs_discovered": 4, 00:11:30.026 "num_base_bdevs_operational": 4, 00:11:30.026 "base_bdevs_list": [ 00:11:30.026 { 00:11:30.026 "name": "NewBaseBdev", 00:11:30.026 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:30.026 "is_configured": true, 00:11:30.026 "data_offset": 0, 00:11:30.026 "data_size": 65536 00:11:30.026 }, 00:11:30.026 { 00:11:30.026 "name": "BaseBdev2", 00:11:30.026 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:30.026 "is_configured": true, 00:11:30.026 "data_offset": 0, 00:11:30.026 "data_size": 65536 00:11:30.026 }, 00:11:30.026 { 00:11:30.026 "name": "BaseBdev3", 00:11:30.026 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:30.026 "is_configured": true, 00:11:30.026 "data_offset": 0, 00:11:30.026 "data_size": 65536 00:11:30.026 }, 00:11:30.026 { 00:11:30.026 "name": "BaseBdev4", 00:11:30.026 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:30.026 "is_configured": true, 00:11:30.026 "data_offset": 0, 00:11:30.026 "data_size": 65536 00:11:30.026 } 00:11:30.026 ] 00:11:30.026 }' 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.026 16:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.594 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.594 16:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.594 16:12:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.594 [2024-09-28 16:12:45.014928] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.594 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.595 "name": "Existed_Raid", 00:11:30.595 "aliases": [ 00:11:30.595 "29643b5c-98b3-45b9-bde7-f9ffe764dab9" 00:11:30.595 ], 00:11:30.595 "product_name": "Raid Volume", 00:11:30.595 "block_size": 512, 00:11:30.595 "num_blocks": 262144, 00:11:30.595 "uuid": "29643b5c-98b3-45b9-bde7-f9ffe764dab9", 00:11:30.595 "assigned_rate_limits": { 00:11:30.595 "rw_ios_per_sec": 0, 00:11:30.595 "rw_mbytes_per_sec": 0, 00:11:30.595 "r_mbytes_per_sec": 0, 00:11:30.595 "w_mbytes_per_sec": 0 00:11:30.595 }, 00:11:30.595 "claimed": false, 00:11:30.595 "zoned": false, 00:11:30.595 "supported_io_types": { 00:11:30.595 "read": true, 00:11:30.595 "write": true, 00:11:30.595 "unmap": true, 00:11:30.595 "flush": true, 00:11:30.595 "reset": true, 00:11:30.595 "nvme_admin": false, 00:11:30.595 "nvme_io": false, 00:11:30.595 "nvme_io_md": false, 00:11:30.595 "write_zeroes": true, 00:11:30.595 "zcopy": false, 00:11:30.595 "get_zone_info": false, 00:11:30.595 "zone_management": false, 00:11:30.595 "zone_append": false, 00:11:30.595 "compare": false, 00:11:30.595 "compare_and_write": false, 00:11:30.595 "abort": false, 00:11:30.595 "seek_hole": false, 00:11:30.595 "seek_data": false, 00:11:30.595 "copy": false, 00:11:30.595 "nvme_iov_md": false 00:11:30.595 }, 
00:11:30.595 "memory_domains": [ 00:11:30.595 { 00:11:30.595 "dma_device_id": "system", 00:11:30.595 "dma_device_type": 1 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.595 "dma_device_type": 2 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "dma_device_id": "system", 00:11:30.595 "dma_device_type": 1 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.595 "dma_device_type": 2 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "dma_device_id": "system", 00:11:30.595 "dma_device_type": 1 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.595 "dma_device_type": 2 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "dma_device_id": "system", 00:11:30.595 "dma_device_type": 1 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.595 "dma_device_type": 2 00:11:30.595 } 00:11:30.595 ], 00:11:30.595 "driver_specific": { 00:11:30.595 "raid": { 00:11:30.595 "uuid": "29643b5c-98b3-45b9-bde7-f9ffe764dab9", 00:11:30.595 "strip_size_kb": 64, 00:11:30.595 "state": "online", 00:11:30.595 "raid_level": "concat", 00:11:30.595 "superblock": false, 00:11:30.595 "num_base_bdevs": 4, 00:11:30.595 "num_base_bdevs_discovered": 4, 00:11:30.595 "num_base_bdevs_operational": 4, 00:11:30.595 "base_bdevs_list": [ 00:11:30.595 { 00:11:30.595 "name": "NewBaseBdev", 00:11:30.595 "uuid": "72124bb3-3c4c-4a17-8e88-c35d11fb5b88", 00:11:30.595 "is_configured": true, 00:11:30.595 "data_offset": 0, 00:11:30.595 "data_size": 65536 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "name": "BaseBdev2", 00:11:30.595 "uuid": "3a63d1fe-1441-4e32-b369-273c619667aa", 00:11:30.595 "is_configured": true, 00:11:30.595 "data_offset": 0, 00:11:30.595 "data_size": 65536 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "name": "BaseBdev3", 00:11:30.595 "uuid": "45ace435-cb03-4071-8ad2-737b06d500e2", 00:11:30.595 "is_configured": true, 00:11:30.595 "data_offset": 0, 
00:11:30.595 "data_size": 65536 00:11:30.595 }, 00:11:30.595 { 00:11:30.595 "name": "BaseBdev4", 00:11:30.595 "uuid": "16d068fa-9e0e-4d70-a60b-cb199abd380d", 00:11:30.595 "is_configured": true, 00:11:30.595 "data_offset": 0, 00:11:30.595 "data_size": 65536 00:11:30.595 } 00:11:30.595 ] 00:11:30.595 } 00:11:30.595 } 00:11:30.595 }' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.595 BaseBdev2 00:11:30.595 BaseBdev3 00:11:30.595 BaseBdev4' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.595 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.855 [2024-09-28 16:12:45.302041] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.855 [2024-09-28 16:12:45.302112] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.855 [2024-09-28 16:12:45.302226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.855 [2024-09-28 16:12:45.302328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.855 [2024-09-28 16:12:45.302368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71314 00:11:30.855 16:12:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71314 ']' 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71314 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71314 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.855 killing process with pid 71314 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71314' 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71314 00:11:30.855 [2024-09-28 16:12:45.352996] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.855 16:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71314 00:11:31.115 [2024-09-28 16:12:45.771381] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.497 16:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.497 00:11:32.497 real 0m11.891s 00:11:32.497 user 0m18.411s 00:11:32.497 sys 0m2.331s 00:11:32.497 ************************************ 00:11:32.497 END TEST raid_state_function_test 00:11:32.497 ************************************ 00:11:32.497 16:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.497 16:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.497 16:12:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:32.497 16:12:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:32.497 16:12:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.497 16:12:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.756 ************************************ 00:11:32.756 START TEST raid_state_function_test_sb 00:11:32.756 ************************************ 00:11:32.756 16:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:32.756 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:32.756 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:32.757 Process raid pid: 71992 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=71992 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71992' 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71992 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71992 ']' 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.757 16:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.757 [2024-09-28 16:12:47.288533] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:32.757 [2024-09-28 16:12:47.288720] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.016 [2024-09-28 16:12:47.452802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.016 [2024-09-28 16:12:47.694739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.276 [2024-09-28 16:12:47.927103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.276 [2024-09-28 16:12:47.927236] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.536 [2024-09-28 16:12:48.112154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.536 [2024-09-28 16:12:48.112311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.536 [2024-09-28 16:12:48.112342] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.536 [2024-09-28 16:12:48.112367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.536 [2024-09-28 16:12:48.112385] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:33.536 [2024-09-28 16:12:48.112408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.536 [2024-09-28 16:12:48.112441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.536 [2024-09-28 16:12:48.112463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.536 16:12:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.536 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.537 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.537 "name": "Existed_Raid", 00:11:33.537 "uuid": "dba17f9e-7832-49f0-8283-3ca7314ada80", 00:11:33.537 "strip_size_kb": 64, 00:11:33.537 "state": "configuring", 00:11:33.537 "raid_level": "concat", 00:11:33.537 "superblock": true, 00:11:33.537 "num_base_bdevs": 4, 00:11:33.537 "num_base_bdevs_discovered": 0, 00:11:33.537 "num_base_bdevs_operational": 4, 00:11:33.537 "base_bdevs_list": [ 00:11:33.537 { 00:11:33.537 "name": "BaseBdev1", 00:11:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.537 "is_configured": false, 00:11:33.537 "data_offset": 0, 00:11:33.537 "data_size": 0 00:11:33.537 }, 00:11:33.537 { 00:11:33.537 "name": "BaseBdev2", 00:11:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.537 "is_configured": false, 00:11:33.537 "data_offset": 0, 00:11:33.537 "data_size": 0 00:11:33.537 }, 00:11:33.537 { 00:11:33.537 "name": "BaseBdev3", 00:11:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.537 "is_configured": false, 00:11:33.537 "data_offset": 0, 00:11:33.537 "data_size": 0 00:11:33.537 }, 00:11:33.537 { 00:11:33.537 "name": "BaseBdev4", 00:11:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.537 "is_configured": false, 00:11:33.537 "data_offset": 0, 00:11:33.537 "data_size": 0 00:11:33.537 } 00:11:33.537 ] 00:11:33.537 }' 00:11:33.537 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.537 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.152 16:12:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.152 [2024-09-28 16:12:48.519385] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.152 [2024-09-28 16:12:48.519493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.152 [2024-09-28 16:12:48.531396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.152 [2024-09-28 16:12:48.531440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.152 [2024-09-28 16:12:48.531450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.152 [2024-09-28 16:12:48.531460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.152 [2024-09-28 16:12:48.531466] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.152 [2024-09-28 16:12:48.531475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.152 [2024-09-28 16:12:48.531481] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:34.152 [2024-09-28 16:12:48.531491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.152 [2024-09-28 16:12:48.612815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.152 BaseBdev1 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.152 [ 00:11:34.152 { 00:11:34.152 "name": "BaseBdev1", 00:11:34.152 "aliases": [ 00:11:34.152 "055fae49-3868-4705-a4d3-973ab46eae87" 00:11:34.152 ], 00:11:34.152 "product_name": "Malloc disk", 00:11:34.152 "block_size": 512, 00:11:34.152 "num_blocks": 65536, 00:11:34.152 "uuid": "055fae49-3868-4705-a4d3-973ab46eae87", 00:11:34.152 "assigned_rate_limits": { 00:11:34.152 "rw_ios_per_sec": 0, 00:11:34.152 "rw_mbytes_per_sec": 0, 00:11:34.152 "r_mbytes_per_sec": 0, 00:11:34.152 "w_mbytes_per_sec": 0 00:11:34.152 }, 00:11:34.152 "claimed": true, 00:11:34.152 "claim_type": "exclusive_write", 00:11:34.152 "zoned": false, 00:11:34.152 "supported_io_types": { 00:11:34.152 "read": true, 00:11:34.152 "write": true, 00:11:34.152 "unmap": true, 00:11:34.152 "flush": true, 00:11:34.152 "reset": true, 00:11:34.152 "nvme_admin": false, 00:11:34.152 "nvme_io": false, 00:11:34.152 "nvme_io_md": false, 00:11:34.152 "write_zeroes": true, 00:11:34.152 "zcopy": true, 00:11:34.152 "get_zone_info": false, 00:11:34.152 "zone_management": false, 00:11:34.152 "zone_append": false, 00:11:34.152 "compare": false, 00:11:34.152 "compare_and_write": false, 00:11:34.152 "abort": true, 00:11:34.152 "seek_hole": false, 00:11:34.152 "seek_data": false, 00:11:34.152 "copy": true, 00:11:34.152 "nvme_iov_md": false 00:11:34.152 }, 00:11:34.152 "memory_domains": [ 00:11:34.152 { 00:11:34.152 "dma_device_id": "system", 00:11:34.152 "dma_device_type": 1 00:11:34.152 }, 00:11:34.152 { 00:11:34.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.152 "dma_device_type": 2 00:11:34.152 } 
00:11:34.152 ], 00:11:34.152 "driver_specific": {} 00:11:34.152 } 00:11:34.152 ] 00:11:34.152 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.153 16:12:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.153 "name": "Existed_Raid", 00:11:34.153 "uuid": "f8870350-dff7-4343-9dee-e4d03b3a03be", 00:11:34.153 "strip_size_kb": 64, 00:11:34.153 "state": "configuring", 00:11:34.153 "raid_level": "concat", 00:11:34.153 "superblock": true, 00:11:34.153 "num_base_bdevs": 4, 00:11:34.153 "num_base_bdevs_discovered": 1, 00:11:34.153 "num_base_bdevs_operational": 4, 00:11:34.153 "base_bdevs_list": [ 00:11:34.153 { 00:11:34.153 "name": "BaseBdev1", 00:11:34.153 "uuid": "055fae49-3868-4705-a4d3-973ab46eae87", 00:11:34.153 "is_configured": true, 00:11:34.153 "data_offset": 2048, 00:11:34.153 "data_size": 63488 00:11:34.153 }, 00:11:34.153 { 00:11:34.153 "name": "BaseBdev2", 00:11:34.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.153 "is_configured": false, 00:11:34.153 "data_offset": 0, 00:11:34.153 "data_size": 0 00:11:34.153 }, 00:11:34.153 { 00:11:34.153 "name": "BaseBdev3", 00:11:34.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.153 "is_configured": false, 00:11:34.153 "data_offset": 0, 00:11:34.153 "data_size": 0 00:11:34.153 }, 00:11:34.153 { 00:11:34.153 "name": "BaseBdev4", 00:11:34.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.153 "is_configured": false, 00:11:34.153 "data_offset": 0, 00:11:34.153 "data_size": 0 00:11:34.153 } 00:11:34.153 ] 00:11:34.153 }' 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.153 16:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.439 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.439 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.439 16:12:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.439 [2024-09-28 16:12:49.104014] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.439 [2024-09-28 16:12:49.104069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.439 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.439 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.439 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.439 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.439 [2024-09-28 16:12:49.116046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.439 [2024-09-28 16:12:49.118081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.439 [2024-09-28 16:12:49.118119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.439 [2024-09-28 16:12:49.118128] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.439 [2024-09-28 16:12:49.118138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.439 [2024-09-28 16:12:49.118143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.439 [2024-09-28 16:12:49.118152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.699 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:34.700 "name": "Existed_Raid", 00:11:34.700 "uuid": "86e799a0-af73-4331-a4c3-6bc5068f452c", 00:11:34.700 "strip_size_kb": 64, 00:11:34.700 "state": "configuring", 00:11:34.700 "raid_level": "concat", 00:11:34.700 "superblock": true, 00:11:34.700 "num_base_bdevs": 4, 00:11:34.700 "num_base_bdevs_discovered": 1, 00:11:34.700 "num_base_bdevs_operational": 4, 00:11:34.700 "base_bdevs_list": [ 00:11:34.700 { 00:11:34.700 "name": "BaseBdev1", 00:11:34.700 "uuid": "055fae49-3868-4705-a4d3-973ab46eae87", 00:11:34.700 "is_configured": true, 00:11:34.700 "data_offset": 2048, 00:11:34.700 "data_size": 63488 00:11:34.700 }, 00:11:34.700 { 00:11:34.700 "name": "BaseBdev2", 00:11:34.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.700 "is_configured": false, 00:11:34.700 "data_offset": 0, 00:11:34.700 "data_size": 0 00:11:34.700 }, 00:11:34.700 { 00:11:34.700 "name": "BaseBdev3", 00:11:34.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.700 "is_configured": false, 00:11:34.700 "data_offset": 0, 00:11:34.700 "data_size": 0 00:11:34.700 }, 00:11:34.700 { 00:11:34.700 "name": "BaseBdev4", 00:11:34.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.700 "is_configured": false, 00:11:34.700 "data_offset": 0, 00:11:34.700 "data_size": 0 00:11:34.700 } 00:11:34.700 ] 00:11:34.700 }' 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.700 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.960 [2024-09-28 16:12:49.624097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:34.960 BaseBdev2 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.960 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.220 [ 00:11:35.220 { 00:11:35.220 "name": "BaseBdev2", 00:11:35.220 "aliases": [ 00:11:35.220 "f62167eb-ca18-407b-852c-bf322076178e" 00:11:35.220 ], 00:11:35.220 "product_name": "Malloc disk", 00:11:35.220 "block_size": 512, 00:11:35.220 "num_blocks": 65536, 00:11:35.220 "uuid": "f62167eb-ca18-407b-852c-bf322076178e", 
00:11:35.220 "assigned_rate_limits": { 00:11:35.220 "rw_ios_per_sec": 0, 00:11:35.220 "rw_mbytes_per_sec": 0, 00:11:35.220 "r_mbytes_per_sec": 0, 00:11:35.220 "w_mbytes_per_sec": 0 00:11:35.220 }, 00:11:35.220 "claimed": true, 00:11:35.220 "claim_type": "exclusive_write", 00:11:35.220 "zoned": false, 00:11:35.220 "supported_io_types": { 00:11:35.220 "read": true, 00:11:35.220 "write": true, 00:11:35.220 "unmap": true, 00:11:35.220 "flush": true, 00:11:35.220 "reset": true, 00:11:35.220 "nvme_admin": false, 00:11:35.220 "nvme_io": false, 00:11:35.220 "nvme_io_md": false, 00:11:35.220 "write_zeroes": true, 00:11:35.220 "zcopy": true, 00:11:35.220 "get_zone_info": false, 00:11:35.220 "zone_management": false, 00:11:35.220 "zone_append": false, 00:11:35.220 "compare": false, 00:11:35.220 "compare_and_write": false, 00:11:35.220 "abort": true, 00:11:35.220 "seek_hole": false, 00:11:35.220 "seek_data": false, 00:11:35.220 "copy": true, 00:11:35.220 "nvme_iov_md": false 00:11:35.220 }, 00:11:35.220 "memory_domains": [ 00:11:35.220 { 00:11:35.220 "dma_device_id": "system", 00:11:35.220 "dma_device_type": 1 00:11:35.220 }, 00:11:35.220 { 00:11:35.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.220 "dma_device_type": 2 00:11:35.220 } 00:11:35.220 ], 00:11:35.220 "driver_specific": {} 00:11:35.220 } 00:11:35.220 ] 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.220 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.221 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.221 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.221 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.221 "name": "Existed_Raid", 00:11:35.221 "uuid": "86e799a0-af73-4331-a4c3-6bc5068f452c", 00:11:35.221 "strip_size_kb": 64, 00:11:35.221 "state": "configuring", 00:11:35.221 "raid_level": "concat", 00:11:35.221 "superblock": true, 00:11:35.221 "num_base_bdevs": 4, 00:11:35.221 "num_base_bdevs_discovered": 2, 00:11:35.221 
"num_base_bdevs_operational": 4, 00:11:35.221 "base_bdevs_list": [ 00:11:35.221 { 00:11:35.221 "name": "BaseBdev1", 00:11:35.221 "uuid": "055fae49-3868-4705-a4d3-973ab46eae87", 00:11:35.221 "is_configured": true, 00:11:35.221 "data_offset": 2048, 00:11:35.221 "data_size": 63488 00:11:35.221 }, 00:11:35.221 { 00:11:35.221 "name": "BaseBdev2", 00:11:35.221 "uuid": "f62167eb-ca18-407b-852c-bf322076178e", 00:11:35.221 "is_configured": true, 00:11:35.221 "data_offset": 2048, 00:11:35.221 "data_size": 63488 00:11:35.221 }, 00:11:35.221 { 00:11:35.221 "name": "BaseBdev3", 00:11:35.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.221 "is_configured": false, 00:11:35.221 "data_offset": 0, 00:11:35.221 "data_size": 0 00:11:35.221 }, 00:11:35.221 { 00:11:35.221 "name": "BaseBdev4", 00:11:35.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.221 "is_configured": false, 00:11:35.221 "data_offset": 0, 00:11:35.221 "data_size": 0 00:11:35.221 } 00:11:35.221 ] 00:11:35.221 }' 00:11:35.221 16:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.221 16:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.481 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.481 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.481 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.741 [2024-09-28 16:12:50.172176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.741 BaseBdev3 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.741 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.741 [ 00:11:35.741 { 00:11:35.741 "name": "BaseBdev3", 00:11:35.741 "aliases": [ 00:11:35.741 "da6400ab-b5d5-4e5d-bb9f-326f323579b5" 00:11:35.741 ], 00:11:35.741 "product_name": "Malloc disk", 00:11:35.741 "block_size": 512, 00:11:35.741 "num_blocks": 65536, 00:11:35.741 "uuid": "da6400ab-b5d5-4e5d-bb9f-326f323579b5", 00:11:35.741 "assigned_rate_limits": { 00:11:35.741 "rw_ios_per_sec": 0, 00:11:35.741 "rw_mbytes_per_sec": 0, 00:11:35.741 "r_mbytes_per_sec": 0, 00:11:35.741 "w_mbytes_per_sec": 0 00:11:35.741 }, 00:11:35.741 "claimed": true, 00:11:35.741 "claim_type": "exclusive_write", 00:11:35.741 "zoned": false, 00:11:35.741 "supported_io_types": { 
00:11:35.741 "read": true, 00:11:35.741 "write": true, 00:11:35.741 "unmap": true, 00:11:35.741 "flush": true, 00:11:35.741 "reset": true, 00:11:35.741 "nvme_admin": false, 00:11:35.741 "nvme_io": false, 00:11:35.741 "nvme_io_md": false, 00:11:35.741 "write_zeroes": true, 00:11:35.741 "zcopy": true, 00:11:35.741 "get_zone_info": false, 00:11:35.741 "zone_management": false, 00:11:35.741 "zone_append": false, 00:11:35.741 "compare": false, 00:11:35.741 "compare_and_write": false, 00:11:35.741 "abort": true, 00:11:35.741 "seek_hole": false, 00:11:35.741 "seek_data": false, 00:11:35.741 "copy": true, 00:11:35.741 "nvme_iov_md": false 00:11:35.741 }, 00:11:35.741 "memory_domains": [ 00:11:35.741 { 00:11:35.741 "dma_device_id": "system", 00:11:35.741 "dma_device_type": 1 00:11:35.741 }, 00:11:35.741 { 00:11:35.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.741 "dma_device_type": 2 00:11:35.741 } 00:11:35.741 ], 00:11:35.741 "driver_specific": {} 00:11:35.741 } 00:11:35.742 ] 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.742 "name": "Existed_Raid", 00:11:35.742 "uuid": "86e799a0-af73-4331-a4c3-6bc5068f452c", 00:11:35.742 "strip_size_kb": 64, 00:11:35.742 "state": "configuring", 00:11:35.742 "raid_level": "concat", 00:11:35.742 "superblock": true, 00:11:35.742 "num_base_bdevs": 4, 00:11:35.742 "num_base_bdevs_discovered": 3, 00:11:35.742 "num_base_bdevs_operational": 4, 00:11:35.742 "base_bdevs_list": [ 00:11:35.742 { 00:11:35.742 "name": "BaseBdev1", 00:11:35.742 "uuid": "055fae49-3868-4705-a4d3-973ab46eae87", 00:11:35.742 "is_configured": true, 00:11:35.742 "data_offset": 2048, 00:11:35.742 "data_size": 63488 00:11:35.742 }, 00:11:35.742 { 00:11:35.742 "name": "BaseBdev2", 00:11:35.742 
"uuid": "f62167eb-ca18-407b-852c-bf322076178e", 00:11:35.742 "is_configured": true, 00:11:35.742 "data_offset": 2048, 00:11:35.742 "data_size": 63488 00:11:35.742 }, 00:11:35.742 { 00:11:35.742 "name": "BaseBdev3", 00:11:35.742 "uuid": "da6400ab-b5d5-4e5d-bb9f-326f323579b5", 00:11:35.742 "is_configured": true, 00:11:35.742 "data_offset": 2048, 00:11:35.742 "data_size": 63488 00:11:35.742 }, 00:11:35.742 { 00:11:35.742 "name": "BaseBdev4", 00:11:35.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.742 "is_configured": false, 00:11:35.742 "data_offset": 0, 00:11:35.742 "data_size": 0 00:11:35.742 } 00:11:35.742 ] 00:11:35.742 }' 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.742 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.002 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:36.002 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.002 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.262 [2024-09-28 16:12:50.686965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.262 [2024-09-28 16:12:50.687289] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.262 [2024-09-28 16:12:50.687313] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.262 [2024-09-28 16:12:50.687621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.262 BaseBdev4 00:11:36.262 [2024-09-28 16:12:50.687783] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.262 [2024-09-28 16:12:50.687821] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:36.262 [2024-09-28 16:12:50.687977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.262 [ 00:11:36.262 { 00:11:36.262 "name": "BaseBdev4", 00:11:36.262 "aliases": [ 00:11:36.262 "cf011532-5004-402b-9818-09dc9f4d6726" 00:11:36.262 ], 00:11:36.262 "product_name": "Malloc disk", 00:11:36.262 "block_size": 512, 00:11:36.262 
"num_blocks": 65536, 00:11:36.262 "uuid": "cf011532-5004-402b-9818-09dc9f4d6726", 00:11:36.262 "assigned_rate_limits": { 00:11:36.262 "rw_ios_per_sec": 0, 00:11:36.262 "rw_mbytes_per_sec": 0, 00:11:36.262 "r_mbytes_per_sec": 0, 00:11:36.262 "w_mbytes_per_sec": 0 00:11:36.262 }, 00:11:36.262 "claimed": true, 00:11:36.262 "claim_type": "exclusive_write", 00:11:36.262 "zoned": false, 00:11:36.262 "supported_io_types": { 00:11:36.262 "read": true, 00:11:36.262 "write": true, 00:11:36.262 "unmap": true, 00:11:36.262 "flush": true, 00:11:36.262 "reset": true, 00:11:36.262 "nvme_admin": false, 00:11:36.262 "nvme_io": false, 00:11:36.262 "nvme_io_md": false, 00:11:36.262 "write_zeroes": true, 00:11:36.262 "zcopy": true, 00:11:36.262 "get_zone_info": false, 00:11:36.262 "zone_management": false, 00:11:36.262 "zone_append": false, 00:11:36.262 "compare": false, 00:11:36.262 "compare_and_write": false, 00:11:36.262 "abort": true, 00:11:36.262 "seek_hole": false, 00:11:36.262 "seek_data": false, 00:11:36.262 "copy": true, 00:11:36.262 "nvme_iov_md": false 00:11:36.262 }, 00:11:36.262 "memory_domains": [ 00:11:36.262 { 00:11:36.262 "dma_device_id": "system", 00:11:36.262 "dma_device_type": 1 00:11:36.262 }, 00:11:36.262 { 00:11:36.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.262 "dma_device_type": 2 00:11:36.262 } 00:11:36.262 ], 00:11:36.262 "driver_specific": {} 00:11:36.262 } 00:11:36.262 ] 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.262 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.263 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.263 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.263 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.263 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.263 "name": "Existed_Raid", 00:11:36.263 "uuid": "86e799a0-af73-4331-a4c3-6bc5068f452c", 00:11:36.263 "strip_size_kb": 64, 00:11:36.263 "state": "online", 00:11:36.263 "raid_level": "concat", 00:11:36.263 "superblock": true, 00:11:36.263 "num_base_bdevs": 4, 
00:11:36.263 "num_base_bdevs_discovered": 4, 00:11:36.263 "num_base_bdevs_operational": 4, 00:11:36.263 "base_bdevs_list": [ 00:11:36.263 { 00:11:36.263 "name": "BaseBdev1", 00:11:36.263 "uuid": "055fae49-3868-4705-a4d3-973ab46eae87", 00:11:36.263 "is_configured": true, 00:11:36.263 "data_offset": 2048, 00:11:36.263 "data_size": 63488 00:11:36.263 }, 00:11:36.263 { 00:11:36.263 "name": "BaseBdev2", 00:11:36.263 "uuid": "f62167eb-ca18-407b-852c-bf322076178e", 00:11:36.263 "is_configured": true, 00:11:36.263 "data_offset": 2048, 00:11:36.263 "data_size": 63488 00:11:36.263 }, 00:11:36.263 { 00:11:36.263 "name": "BaseBdev3", 00:11:36.263 "uuid": "da6400ab-b5d5-4e5d-bb9f-326f323579b5", 00:11:36.263 "is_configured": true, 00:11:36.263 "data_offset": 2048, 00:11:36.263 "data_size": 63488 00:11:36.263 }, 00:11:36.263 { 00:11:36.263 "name": "BaseBdev4", 00:11:36.263 "uuid": "cf011532-5004-402b-9818-09dc9f4d6726", 00:11:36.263 "is_configured": true, 00:11:36.263 "data_offset": 2048, 00:11:36.263 "data_size": 63488 00:11:36.263 } 00:11:36.263 ] 00:11:36.263 }' 00:11:36.263 16:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.263 16:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.523 
16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.523 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.524 [2024-09-28 16:12:51.154557] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.524 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.524 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.524 "name": "Existed_Raid", 00:11:36.524 "aliases": [ 00:11:36.524 "86e799a0-af73-4331-a4c3-6bc5068f452c" 00:11:36.524 ], 00:11:36.524 "product_name": "Raid Volume", 00:11:36.524 "block_size": 512, 00:11:36.524 "num_blocks": 253952, 00:11:36.524 "uuid": "86e799a0-af73-4331-a4c3-6bc5068f452c", 00:11:36.524 "assigned_rate_limits": { 00:11:36.524 "rw_ios_per_sec": 0, 00:11:36.524 "rw_mbytes_per_sec": 0, 00:11:36.524 "r_mbytes_per_sec": 0, 00:11:36.524 "w_mbytes_per_sec": 0 00:11:36.524 }, 00:11:36.524 "claimed": false, 00:11:36.524 "zoned": false, 00:11:36.524 "supported_io_types": { 00:11:36.524 "read": true, 00:11:36.524 "write": true, 00:11:36.524 "unmap": true, 00:11:36.524 "flush": true, 00:11:36.524 "reset": true, 00:11:36.524 "nvme_admin": false, 00:11:36.524 "nvme_io": false, 00:11:36.524 "nvme_io_md": false, 00:11:36.524 "write_zeroes": true, 00:11:36.524 "zcopy": false, 00:11:36.524 "get_zone_info": false, 00:11:36.524 "zone_management": false, 00:11:36.524 "zone_append": false, 00:11:36.524 "compare": false, 00:11:36.524 "compare_and_write": false, 00:11:36.524 "abort": false, 00:11:36.524 "seek_hole": false, 00:11:36.524 "seek_data": false, 00:11:36.524 "copy": false, 00:11:36.524 
"nvme_iov_md": false 00:11:36.524 }, 00:11:36.524 "memory_domains": [ 00:11:36.524 { 00:11:36.524 "dma_device_id": "system", 00:11:36.524 "dma_device_type": 1 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.524 "dma_device_type": 2 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "dma_device_id": "system", 00:11:36.524 "dma_device_type": 1 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.524 "dma_device_type": 2 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "dma_device_id": "system", 00:11:36.524 "dma_device_type": 1 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.524 "dma_device_type": 2 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "dma_device_id": "system", 00:11:36.524 "dma_device_type": 1 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.524 "dma_device_type": 2 00:11:36.524 } 00:11:36.524 ], 00:11:36.524 "driver_specific": { 00:11:36.524 "raid": { 00:11:36.524 "uuid": "86e799a0-af73-4331-a4c3-6bc5068f452c", 00:11:36.524 "strip_size_kb": 64, 00:11:36.524 "state": "online", 00:11:36.524 "raid_level": "concat", 00:11:36.524 "superblock": true, 00:11:36.524 "num_base_bdevs": 4, 00:11:36.524 "num_base_bdevs_discovered": 4, 00:11:36.524 "num_base_bdevs_operational": 4, 00:11:36.524 "base_bdevs_list": [ 00:11:36.524 { 00:11:36.524 "name": "BaseBdev1", 00:11:36.524 "uuid": "055fae49-3868-4705-a4d3-973ab46eae87", 00:11:36.524 "is_configured": true, 00:11:36.524 "data_offset": 2048, 00:11:36.524 "data_size": 63488 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "name": "BaseBdev2", 00:11:36.524 "uuid": "f62167eb-ca18-407b-852c-bf322076178e", 00:11:36.524 "is_configured": true, 00:11:36.524 "data_offset": 2048, 00:11:36.524 "data_size": 63488 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "name": "BaseBdev3", 00:11:36.524 "uuid": "da6400ab-b5d5-4e5d-bb9f-326f323579b5", 00:11:36.524 "is_configured": true, 
00:11:36.524 "data_offset": 2048, 00:11:36.524 "data_size": 63488 00:11:36.524 }, 00:11:36.524 { 00:11:36.524 "name": "BaseBdev4", 00:11:36.524 "uuid": "cf011532-5004-402b-9818-09dc9f4d6726", 00:11:36.524 "is_configured": true, 00:11:36.524 "data_offset": 2048, 00:11:36.524 "data_size": 63488 00:11:36.524 } 00:11:36.524 ] 00:11:36.524 } 00:11:36.524 } 00:11:36.524 }' 00:11:36.524 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.784 BaseBdev2 00:11:36.784 BaseBdev3 00:11:36.784 BaseBdev4' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.784 16:12:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.784 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.043 [2024-09-28 16:12:51.473665] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.043 [2024-09-28 16:12:51.473699] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.043 [2024-09-28 16:12:51.473752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.043 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.044 "name": "Existed_Raid", 00:11:37.044 "uuid": "86e799a0-af73-4331-a4c3-6bc5068f452c", 00:11:37.044 "strip_size_kb": 64, 00:11:37.044 "state": "offline", 00:11:37.044 "raid_level": "concat", 00:11:37.044 "superblock": true, 00:11:37.044 "num_base_bdevs": 4, 00:11:37.044 "num_base_bdevs_discovered": 3, 00:11:37.044 "num_base_bdevs_operational": 3, 00:11:37.044 "base_bdevs_list": [ 00:11:37.044 { 00:11:37.044 "name": null, 00:11:37.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.044 "is_configured": false, 00:11:37.044 "data_offset": 0, 00:11:37.044 "data_size": 63488 00:11:37.044 }, 00:11:37.044 { 00:11:37.044 "name": "BaseBdev2", 00:11:37.044 "uuid": "f62167eb-ca18-407b-852c-bf322076178e", 00:11:37.044 "is_configured": true, 00:11:37.044 "data_offset": 2048, 00:11:37.044 "data_size": 63488 00:11:37.044 }, 00:11:37.044 { 00:11:37.044 "name": "BaseBdev3", 00:11:37.044 "uuid": "da6400ab-b5d5-4e5d-bb9f-326f323579b5", 00:11:37.044 "is_configured": true, 00:11:37.044 "data_offset": 2048, 00:11:37.044 "data_size": 63488 00:11:37.044 }, 00:11:37.044 { 00:11:37.044 "name": "BaseBdev4", 00:11:37.044 "uuid": "cf011532-5004-402b-9818-09dc9f4d6726", 00:11:37.044 "is_configured": true, 00:11:37.044 "data_offset": 2048, 00:11:37.044 "data_size": 63488 00:11:37.044 } 00:11:37.044 ] 00:11:37.044 }' 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.044 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.612 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.612 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.612 16:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.612 16:12:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.612 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.612 16:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.612 [2024-09-28 16:12:52.026371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.612 [2024-09-28 16:12:52.181496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.612 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:37.873 16:12:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.873 [2024-09-28 16:12:52.341780] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.873 [2024-09-28 16:12:52.341887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.873 BaseBdev2 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.873 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.134 [ 00:11:38.134 { 00:11:38.134 "name": "BaseBdev2", 00:11:38.134 "aliases": [ 00:11:38.134 
"6f086a66-64e7-490e-9b98-2cdece277b69" 00:11:38.134 ], 00:11:38.134 "product_name": "Malloc disk", 00:11:38.134 "block_size": 512, 00:11:38.134 "num_blocks": 65536, 00:11:38.134 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:38.134 "assigned_rate_limits": { 00:11:38.134 "rw_ios_per_sec": 0, 00:11:38.134 "rw_mbytes_per_sec": 0, 00:11:38.134 "r_mbytes_per_sec": 0, 00:11:38.134 "w_mbytes_per_sec": 0 00:11:38.134 }, 00:11:38.134 "claimed": false, 00:11:38.134 "zoned": false, 00:11:38.134 "supported_io_types": { 00:11:38.134 "read": true, 00:11:38.134 "write": true, 00:11:38.134 "unmap": true, 00:11:38.134 "flush": true, 00:11:38.134 "reset": true, 00:11:38.134 "nvme_admin": false, 00:11:38.135 "nvme_io": false, 00:11:38.135 "nvme_io_md": false, 00:11:38.135 "write_zeroes": true, 00:11:38.135 "zcopy": true, 00:11:38.135 "get_zone_info": false, 00:11:38.135 "zone_management": false, 00:11:38.135 "zone_append": false, 00:11:38.135 "compare": false, 00:11:38.135 "compare_and_write": false, 00:11:38.135 "abort": true, 00:11:38.135 "seek_hole": false, 00:11:38.135 "seek_data": false, 00:11:38.135 "copy": true, 00:11:38.135 "nvme_iov_md": false 00:11:38.135 }, 00:11:38.135 "memory_domains": [ 00:11:38.135 { 00:11:38.135 "dma_device_id": "system", 00:11:38.135 "dma_device_type": 1 00:11:38.135 }, 00:11:38.135 { 00:11:38.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.135 "dma_device_type": 2 00:11:38.135 } 00:11:38.135 ], 00:11:38.135 "driver_specific": {} 00:11:38.135 } 00:11:38.135 ] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.135 16:12:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.135 BaseBdev3 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.135 [ 00:11:38.135 { 
00:11:38.135 "name": "BaseBdev3", 00:11:38.135 "aliases": [ 00:11:38.135 "1c0ec165-df88-401a-b046-5c2d923ac39f" 00:11:38.135 ], 00:11:38.135 "product_name": "Malloc disk", 00:11:38.135 "block_size": 512, 00:11:38.135 "num_blocks": 65536, 00:11:38.135 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:38.135 "assigned_rate_limits": { 00:11:38.135 "rw_ios_per_sec": 0, 00:11:38.135 "rw_mbytes_per_sec": 0, 00:11:38.135 "r_mbytes_per_sec": 0, 00:11:38.135 "w_mbytes_per_sec": 0 00:11:38.135 }, 00:11:38.135 "claimed": false, 00:11:38.135 "zoned": false, 00:11:38.135 "supported_io_types": { 00:11:38.135 "read": true, 00:11:38.135 "write": true, 00:11:38.135 "unmap": true, 00:11:38.135 "flush": true, 00:11:38.135 "reset": true, 00:11:38.135 "nvme_admin": false, 00:11:38.135 "nvme_io": false, 00:11:38.135 "nvme_io_md": false, 00:11:38.135 "write_zeroes": true, 00:11:38.135 "zcopy": true, 00:11:38.135 "get_zone_info": false, 00:11:38.135 "zone_management": false, 00:11:38.135 "zone_append": false, 00:11:38.135 "compare": false, 00:11:38.135 "compare_and_write": false, 00:11:38.135 "abort": true, 00:11:38.135 "seek_hole": false, 00:11:38.135 "seek_data": false, 00:11:38.135 "copy": true, 00:11:38.135 "nvme_iov_md": false 00:11:38.135 }, 00:11:38.135 "memory_domains": [ 00:11:38.135 { 00:11:38.135 "dma_device_id": "system", 00:11:38.135 "dma_device_type": 1 00:11:38.135 }, 00:11:38.135 { 00:11:38.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.135 "dma_device_type": 2 00:11:38.135 } 00:11:38.135 ], 00:11:38.135 "driver_specific": {} 00:11:38.135 } 00:11:38.135 ] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.135 BaseBdev4 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:38.135 [ 00:11:38.135 { 00:11:38.135 "name": "BaseBdev4", 00:11:38.135 "aliases": [ 00:11:38.135 "3e224098-db08-4fbc-bb5d-88142ee33505" 00:11:38.135 ], 00:11:38.135 "product_name": "Malloc disk", 00:11:38.135 "block_size": 512, 00:11:38.135 "num_blocks": 65536, 00:11:38.135 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:38.135 "assigned_rate_limits": { 00:11:38.135 "rw_ios_per_sec": 0, 00:11:38.135 "rw_mbytes_per_sec": 0, 00:11:38.135 "r_mbytes_per_sec": 0, 00:11:38.135 "w_mbytes_per_sec": 0 00:11:38.135 }, 00:11:38.135 "claimed": false, 00:11:38.135 "zoned": false, 00:11:38.135 "supported_io_types": { 00:11:38.135 "read": true, 00:11:38.135 "write": true, 00:11:38.135 "unmap": true, 00:11:38.135 "flush": true, 00:11:38.135 "reset": true, 00:11:38.135 "nvme_admin": false, 00:11:38.135 "nvme_io": false, 00:11:38.135 "nvme_io_md": false, 00:11:38.135 "write_zeroes": true, 00:11:38.135 "zcopy": true, 00:11:38.135 "get_zone_info": false, 00:11:38.135 "zone_management": false, 00:11:38.135 "zone_append": false, 00:11:38.135 "compare": false, 00:11:38.135 "compare_and_write": false, 00:11:38.135 "abort": true, 00:11:38.135 "seek_hole": false, 00:11:38.135 "seek_data": false, 00:11:38.135 "copy": true, 00:11:38.135 "nvme_iov_md": false 00:11:38.135 }, 00:11:38.135 "memory_domains": [ 00:11:38.135 { 00:11:38.135 "dma_device_id": "system", 00:11:38.135 "dma_device_type": 1 00:11:38.135 }, 00:11:38.135 { 00:11:38.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.135 "dma_device_type": 2 00:11:38.135 } 00:11:38.135 ], 00:11:38.135 "driver_specific": {} 00:11:38.135 } 00:11:38.135 ] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.135 16:12:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.135 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.135 [2024-09-28 16:12:52.752893] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.135 [2024-09-28 16:12:52.753008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.135 [2024-09-28 16:12:52.753050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.135 [2024-09-28 16:12:52.755196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.135 [2024-09-28 16:12:52.755308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.136 "name": "Existed_Raid", 00:11:38.136 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:38.136 "strip_size_kb": 64, 00:11:38.136 "state": "configuring", 00:11:38.136 "raid_level": "concat", 00:11:38.136 "superblock": true, 00:11:38.136 "num_base_bdevs": 4, 00:11:38.136 "num_base_bdevs_discovered": 3, 00:11:38.136 "num_base_bdevs_operational": 4, 00:11:38.136 "base_bdevs_list": [ 00:11:38.136 { 00:11:38.136 "name": "BaseBdev1", 00:11:38.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.136 "is_configured": false, 00:11:38.136 "data_offset": 0, 00:11:38.136 "data_size": 0 00:11:38.136 }, 00:11:38.136 { 00:11:38.136 "name": "BaseBdev2", 00:11:38.136 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:38.136 "is_configured": true, 00:11:38.136 "data_offset": 2048, 00:11:38.136 "data_size": 63488 
00:11:38.136 }, 00:11:38.136 { 00:11:38.136 "name": "BaseBdev3", 00:11:38.136 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:38.136 "is_configured": true, 00:11:38.136 "data_offset": 2048, 00:11:38.136 "data_size": 63488 00:11:38.136 }, 00:11:38.136 { 00:11:38.136 "name": "BaseBdev4", 00:11:38.136 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:38.136 "is_configured": true, 00:11:38.136 "data_offset": 2048, 00:11:38.136 "data_size": 63488 00:11:38.136 } 00:11:38.136 ] 00:11:38.136 }' 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.136 16:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.704 [2024-09-28 16:12:53.204092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.704 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.705 "name": "Existed_Raid", 00:11:38.705 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:38.705 "strip_size_kb": 64, 00:11:38.705 "state": "configuring", 00:11:38.705 "raid_level": "concat", 00:11:38.705 "superblock": true, 00:11:38.705 "num_base_bdevs": 4, 00:11:38.705 "num_base_bdevs_discovered": 2, 00:11:38.705 "num_base_bdevs_operational": 4, 00:11:38.705 "base_bdevs_list": [ 00:11:38.705 { 00:11:38.705 "name": "BaseBdev1", 00:11:38.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.705 "is_configured": false, 00:11:38.705 "data_offset": 0, 00:11:38.705 "data_size": 0 00:11:38.705 }, 00:11:38.705 { 00:11:38.705 "name": null, 00:11:38.705 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:38.705 "is_configured": false, 00:11:38.705 "data_offset": 0, 00:11:38.705 "data_size": 63488 
00:11:38.705 }, 00:11:38.705 { 00:11:38.705 "name": "BaseBdev3", 00:11:38.705 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:38.705 "is_configured": true, 00:11:38.705 "data_offset": 2048, 00:11:38.705 "data_size": 63488 00:11:38.705 }, 00:11:38.705 { 00:11:38.705 "name": "BaseBdev4", 00:11:38.705 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:38.705 "is_configured": true, 00:11:38.705 "data_offset": 2048, 00:11:38.705 "data_size": 63488 00:11:38.705 } 00:11:38.705 ] 00:11:38.705 }' 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.705 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.964 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.964 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.964 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.964 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.223 [2024-09-28 16:12:53.726241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.223 BaseBdev1 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.223 [ 00:11:39.223 { 00:11:39.223 "name": "BaseBdev1", 00:11:39.223 "aliases": [ 00:11:39.223 "e2ae584e-b3fb-4a6e-9be5-597dfca82452" 00:11:39.223 ], 00:11:39.223 "product_name": "Malloc disk", 00:11:39.223 "block_size": 512, 00:11:39.223 "num_blocks": 65536, 00:11:39.223 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:39.223 "assigned_rate_limits": { 00:11:39.223 "rw_ios_per_sec": 0, 00:11:39.223 "rw_mbytes_per_sec": 0, 
00:11:39.223 "r_mbytes_per_sec": 0, 00:11:39.223 "w_mbytes_per_sec": 0 00:11:39.223 }, 00:11:39.223 "claimed": true, 00:11:39.223 "claim_type": "exclusive_write", 00:11:39.223 "zoned": false, 00:11:39.223 "supported_io_types": { 00:11:39.223 "read": true, 00:11:39.223 "write": true, 00:11:39.223 "unmap": true, 00:11:39.223 "flush": true, 00:11:39.223 "reset": true, 00:11:39.223 "nvme_admin": false, 00:11:39.223 "nvme_io": false, 00:11:39.223 "nvme_io_md": false, 00:11:39.223 "write_zeroes": true, 00:11:39.223 "zcopy": true, 00:11:39.223 "get_zone_info": false, 00:11:39.223 "zone_management": false, 00:11:39.223 "zone_append": false, 00:11:39.223 "compare": false, 00:11:39.223 "compare_and_write": false, 00:11:39.223 "abort": true, 00:11:39.223 "seek_hole": false, 00:11:39.223 "seek_data": false, 00:11:39.223 "copy": true, 00:11:39.223 "nvme_iov_md": false 00:11:39.223 }, 00:11:39.223 "memory_domains": [ 00:11:39.223 { 00:11:39.223 "dma_device_id": "system", 00:11:39.223 "dma_device_type": 1 00:11:39.223 }, 00:11:39.223 { 00:11:39.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.223 "dma_device_type": 2 00:11:39.223 } 00:11:39.223 ], 00:11:39.223 "driver_specific": {} 00:11:39.223 } 00:11:39.223 ] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.223 16:12:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.223 "name": "Existed_Raid", 00:11:39.223 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:39.223 "strip_size_kb": 64, 00:11:39.223 "state": "configuring", 00:11:39.223 "raid_level": "concat", 00:11:39.223 "superblock": true, 00:11:39.223 "num_base_bdevs": 4, 00:11:39.223 "num_base_bdevs_discovered": 3, 00:11:39.223 "num_base_bdevs_operational": 4, 00:11:39.223 "base_bdevs_list": [ 00:11:39.223 { 00:11:39.223 "name": "BaseBdev1", 00:11:39.223 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:39.223 "is_configured": true, 00:11:39.223 "data_offset": 2048, 00:11:39.223 "data_size": 63488 00:11:39.223 }, 00:11:39.223 { 
00:11:39.223 "name": null, 00:11:39.223 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:39.223 "is_configured": false, 00:11:39.223 "data_offset": 0, 00:11:39.223 "data_size": 63488 00:11:39.223 }, 00:11:39.223 { 00:11:39.223 "name": "BaseBdev3", 00:11:39.223 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:39.223 "is_configured": true, 00:11:39.223 "data_offset": 2048, 00:11:39.223 "data_size": 63488 00:11:39.223 }, 00:11:39.223 { 00:11:39.223 "name": "BaseBdev4", 00:11:39.223 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:39.223 "is_configured": true, 00:11:39.223 "data_offset": 2048, 00:11:39.223 "data_size": 63488 00:11:39.223 } 00:11:39.223 ] 00:11:39.223 }' 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.223 16:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.482 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.482 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.482 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.482 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 [2024-09-28 16:12:54.209428] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.742 16:12:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.742 "name": "Existed_Raid", 00:11:39.742 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:39.742 "strip_size_kb": 64, 00:11:39.742 "state": "configuring", 00:11:39.742 "raid_level": "concat", 00:11:39.742 "superblock": true, 00:11:39.742 "num_base_bdevs": 4, 00:11:39.742 "num_base_bdevs_discovered": 2, 00:11:39.742 "num_base_bdevs_operational": 4, 00:11:39.742 "base_bdevs_list": [ 00:11:39.742 { 00:11:39.742 "name": "BaseBdev1", 00:11:39.742 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:39.742 "is_configured": true, 00:11:39.742 "data_offset": 2048, 00:11:39.742 "data_size": 63488 00:11:39.742 }, 00:11:39.742 { 00:11:39.742 "name": null, 00:11:39.742 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:39.742 "is_configured": false, 00:11:39.742 "data_offset": 0, 00:11:39.742 "data_size": 63488 00:11:39.742 }, 00:11:39.742 { 00:11:39.742 "name": null, 00:11:39.742 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:39.742 "is_configured": false, 00:11:39.742 "data_offset": 0, 00:11:39.742 "data_size": 63488 00:11:39.742 }, 00:11:39.742 { 00:11:39.742 "name": "BaseBdev4", 00:11:39.742 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:39.742 "is_configured": true, 00:11:39.742 "data_offset": 2048, 00:11:39.742 "data_size": 63488 00:11:39.742 } 00:11:39.742 ] 00:11:39.742 }' 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.742 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.002 
16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.002 [2024-09-28 16:12:54.668665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.002 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.261 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.261 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.261 "name": "Existed_Raid", 00:11:40.261 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:40.261 "strip_size_kb": 64, 00:11:40.261 "state": "configuring", 00:11:40.261 "raid_level": "concat", 00:11:40.261 "superblock": true, 00:11:40.261 "num_base_bdevs": 4, 00:11:40.261 "num_base_bdevs_discovered": 3, 00:11:40.261 "num_base_bdevs_operational": 4, 00:11:40.261 "base_bdevs_list": [ 00:11:40.261 { 00:11:40.261 "name": "BaseBdev1", 00:11:40.261 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:40.261 "is_configured": true, 00:11:40.261 "data_offset": 2048, 00:11:40.261 "data_size": 63488 00:11:40.261 }, 00:11:40.261 { 00:11:40.261 "name": null, 00:11:40.261 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:40.261 "is_configured": false, 00:11:40.261 "data_offset": 0, 00:11:40.261 "data_size": 63488 00:11:40.261 }, 00:11:40.261 { 00:11:40.261 "name": "BaseBdev3", 00:11:40.261 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:40.261 "is_configured": true, 00:11:40.261 "data_offset": 2048, 00:11:40.261 "data_size": 63488 00:11:40.261 }, 00:11:40.261 { 00:11:40.261 "name": "BaseBdev4", 00:11:40.261 "uuid": 
"3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:40.261 "is_configured": true, 00:11:40.261 "data_offset": 2048, 00:11:40.261 "data_size": 63488 00:11:40.261 } 00:11:40.261 ] 00:11:40.261 }' 00:11:40.261 16:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.261 16:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.520 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.520 [2024-09-28 16:12:55.135861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.779 "name": "Existed_Raid", 00:11:40.779 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:40.779 "strip_size_kb": 64, 00:11:40.779 "state": "configuring", 00:11:40.779 "raid_level": "concat", 00:11:40.779 "superblock": true, 00:11:40.779 "num_base_bdevs": 4, 00:11:40.779 "num_base_bdevs_discovered": 2, 00:11:40.779 "num_base_bdevs_operational": 4, 00:11:40.779 "base_bdevs_list": [ 00:11:40.779 { 00:11:40.779 "name": null, 00:11:40.779 
"uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:40.779 "is_configured": false, 00:11:40.779 "data_offset": 0, 00:11:40.779 "data_size": 63488 00:11:40.779 }, 00:11:40.779 { 00:11:40.779 "name": null, 00:11:40.779 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:40.779 "is_configured": false, 00:11:40.779 "data_offset": 0, 00:11:40.779 "data_size": 63488 00:11:40.779 }, 00:11:40.779 { 00:11:40.779 "name": "BaseBdev3", 00:11:40.779 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:40.779 "is_configured": true, 00:11:40.779 "data_offset": 2048, 00:11:40.779 "data_size": 63488 00:11:40.779 }, 00:11:40.779 { 00:11:40.779 "name": "BaseBdev4", 00:11:40.779 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:40.779 "is_configured": true, 00:11:40.779 "data_offset": 2048, 00:11:40.779 "data_size": 63488 00:11:40.779 } 00:11:40.779 ] 00:11:40.779 }' 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.779 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.038 [2024-09-28 16:12:55.699057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.038 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.039 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.039 16:12:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.299 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.299 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.299 "name": "Existed_Raid", 00:11:41.299 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:41.299 "strip_size_kb": 64, 00:11:41.299 "state": "configuring", 00:11:41.299 "raid_level": "concat", 00:11:41.299 "superblock": true, 00:11:41.299 "num_base_bdevs": 4, 00:11:41.299 "num_base_bdevs_discovered": 3, 00:11:41.299 "num_base_bdevs_operational": 4, 00:11:41.299 "base_bdevs_list": [ 00:11:41.299 { 00:11:41.299 "name": null, 00:11:41.299 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:41.299 "is_configured": false, 00:11:41.299 "data_offset": 0, 00:11:41.299 "data_size": 63488 00:11:41.299 }, 00:11:41.299 { 00:11:41.299 "name": "BaseBdev2", 00:11:41.299 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:41.299 "is_configured": true, 00:11:41.299 "data_offset": 2048, 00:11:41.299 "data_size": 63488 00:11:41.299 }, 00:11:41.299 { 00:11:41.299 "name": "BaseBdev3", 00:11:41.299 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:41.299 "is_configured": true, 00:11:41.299 "data_offset": 2048, 00:11:41.299 "data_size": 63488 00:11:41.299 }, 00:11:41.299 { 00:11:41.299 "name": "BaseBdev4", 00:11:41.299 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:41.299 "is_configured": true, 00:11:41.299 "data_offset": 2048, 00:11:41.299 "data_size": 63488 00:11:41.299 } 00:11:41.299 ] 00:11:41.299 }' 00:11:41.299 16:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.299 16:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.559 16:12:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e2ae584e-b3fb-4a6e-9be5-597dfca82452 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.559 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.819 NewBaseBdev 00:11:41.819 [2024-09-28 16:12:56.256665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.819 [2024-09-28 16:12:56.256944] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.819 [2024-09-28 16:12:56.256959] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:41.819 [2024-09-28 16:12:56.257280] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:41.819 [2024-09-28 16:12:56.257428] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.819 [2024-09-28 16:12:56.257440] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.819 [2024-09-28 16:12:56.257581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.819 
16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.819 [ 00:11:41.819 { 00:11:41.819 "name": "NewBaseBdev", 00:11:41.819 "aliases": [ 00:11:41.819 "e2ae584e-b3fb-4a6e-9be5-597dfca82452" 00:11:41.819 ], 00:11:41.819 "product_name": "Malloc disk", 00:11:41.819 "block_size": 512, 00:11:41.819 "num_blocks": 65536, 00:11:41.819 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:41.819 "assigned_rate_limits": { 00:11:41.819 "rw_ios_per_sec": 0, 00:11:41.819 "rw_mbytes_per_sec": 0, 00:11:41.819 "r_mbytes_per_sec": 0, 00:11:41.819 "w_mbytes_per_sec": 0 00:11:41.819 }, 00:11:41.819 "claimed": true, 00:11:41.819 "claim_type": "exclusive_write", 00:11:41.819 "zoned": false, 00:11:41.819 "supported_io_types": { 00:11:41.819 "read": true, 00:11:41.819 "write": true, 00:11:41.819 "unmap": true, 00:11:41.819 "flush": true, 00:11:41.819 "reset": true, 00:11:41.819 "nvme_admin": false, 00:11:41.819 "nvme_io": false, 00:11:41.819 "nvme_io_md": false, 00:11:41.819 "write_zeroes": true, 00:11:41.819 "zcopy": true, 00:11:41.819 "get_zone_info": false, 00:11:41.819 "zone_management": false, 00:11:41.819 "zone_append": false, 00:11:41.819 "compare": false, 00:11:41.819 "compare_and_write": false, 00:11:41.819 "abort": true, 00:11:41.819 "seek_hole": false, 00:11:41.819 "seek_data": false, 00:11:41.819 "copy": true, 00:11:41.819 "nvme_iov_md": false 00:11:41.819 }, 00:11:41.819 "memory_domains": [ 00:11:41.819 { 00:11:41.819 "dma_device_id": "system", 00:11:41.819 "dma_device_type": 1 00:11:41.819 }, 00:11:41.819 { 00:11:41.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.819 "dma_device_type": 2 00:11:41.819 } 00:11:41.819 ], 00:11:41.819 "driver_specific": {} 00:11:41.819 } 00:11:41.819 ] 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.819 16:12:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.819 "name": "Existed_Raid", 00:11:41.819 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:41.819 "strip_size_kb": 64, 00:11:41.819 
"state": "online", 00:11:41.819 "raid_level": "concat", 00:11:41.819 "superblock": true, 00:11:41.819 "num_base_bdevs": 4, 00:11:41.819 "num_base_bdevs_discovered": 4, 00:11:41.819 "num_base_bdevs_operational": 4, 00:11:41.819 "base_bdevs_list": [ 00:11:41.819 { 00:11:41.819 "name": "NewBaseBdev", 00:11:41.819 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:41.819 "is_configured": true, 00:11:41.819 "data_offset": 2048, 00:11:41.819 "data_size": 63488 00:11:41.819 }, 00:11:41.819 { 00:11:41.819 "name": "BaseBdev2", 00:11:41.819 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:41.819 "is_configured": true, 00:11:41.819 "data_offset": 2048, 00:11:41.819 "data_size": 63488 00:11:41.819 }, 00:11:41.819 { 00:11:41.819 "name": "BaseBdev3", 00:11:41.819 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:41.819 "is_configured": true, 00:11:41.819 "data_offset": 2048, 00:11:41.819 "data_size": 63488 00:11:41.819 }, 00:11:41.819 { 00:11:41.819 "name": "BaseBdev4", 00:11:41.819 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:41.819 "is_configured": true, 00:11:41.819 "data_offset": 2048, 00:11:41.819 "data_size": 63488 00:11:41.819 } 00:11:41.819 ] 00:11:41.819 }' 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.819 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.079 
16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.079 [2024-09-28 16:12:56.708262] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.079 "name": "Existed_Raid", 00:11:42.079 "aliases": [ 00:11:42.079 "b3305679-114d-43c9-b362-61462eacf90a" 00:11:42.079 ], 00:11:42.079 "product_name": "Raid Volume", 00:11:42.079 "block_size": 512, 00:11:42.079 "num_blocks": 253952, 00:11:42.079 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:42.079 "assigned_rate_limits": { 00:11:42.079 "rw_ios_per_sec": 0, 00:11:42.079 "rw_mbytes_per_sec": 0, 00:11:42.079 "r_mbytes_per_sec": 0, 00:11:42.079 "w_mbytes_per_sec": 0 00:11:42.079 }, 00:11:42.079 "claimed": false, 00:11:42.079 "zoned": false, 00:11:42.079 "supported_io_types": { 00:11:42.079 "read": true, 00:11:42.079 "write": true, 00:11:42.079 "unmap": true, 00:11:42.079 "flush": true, 00:11:42.079 "reset": true, 00:11:42.079 "nvme_admin": false, 00:11:42.079 "nvme_io": false, 00:11:42.079 "nvme_io_md": false, 00:11:42.079 "write_zeroes": true, 00:11:42.079 "zcopy": false, 00:11:42.079 "get_zone_info": false, 00:11:42.079 "zone_management": false, 00:11:42.079 "zone_append": false, 00:11:42.079 "compare": false, 00:11:42.079 "compare_and_write": false, 00:11:42.079 "abort": 
false, 00:11:42.079 "seek_hole": false, 00:11:42.079 "seek_data": false, 00:11:42.079 "copy": false, 00:11:42.079 "nvme_iov_md": false 00:11:42.079 }, 00:11:42.079 "memory_domains": [ 00:11:42.079 { 00:11:42.079 "dma_device_id": "system", 00:11:42.079 "dma_device_type": 1 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.079 "dma_device_type": 2 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "dma_device_id": "system", 00:11:42.079 "dma_device_type": 1 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.079 "dma_device_type": 2 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "dma_device_id": "system", 00:11:42.079 "dma_device_type": 1 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.079 "dma_device_type": 2 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "dma_device_id": "system", 00:11:42.079 "dma_device_type": 1 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.079 "dma_device_type": 2 00:11:42.079 } 00:11:42.079 ], 00:11:42.079 "driver_specific": { 00:11:42.079 "raid": { 00:11:42.079 "uuid": "b3305679-114d-43c9-b362-61462eacf90a", 00:11:42.079 "strip_size_kb": 64, 00:11:42.079 "state": "online", 00:11:42.079 "raid_level": "concat", 00:11:42.079 "superblock": true, 00:11:42.079 "num_base_bdevs": 4, 00:11:42.079 "num_base_bdevs_discovered": 4, 00:11:42.079 "num_base_bdevs_operational": 4, 00:11:42.079 "base_bdevs_list": [ 00:11:42.079 { 00:11:42.079 "name": "NewBaseBdev", 00:11:42.079 "uuid": "e2ae584e-b3fb-4a6e-9be5-597dfca82452", 00:11:42.079 "is_configured": true, 00:11:42.079 "data_offset": 2048, 00:11:42.079 "data_size": 63488 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "name": "BaseBdev2", 00:11:42.079 "uuid": "6f086a66-64e7-490e-9b98-2cdece277b69", 00:11:42.079 "is_configured": true, 00:11:42.079 "data_offset": 2048, 00:11:42.079 "data_size": 63488 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 
"name": "BaseBdev3", 00:11:42.079 "uuid": "1c0ec165-df88-401a-b046-5c2d923ac39f", 00:11:42.079 "is_configured": true, 00:11:42.079 "data_offset": 2048, 00:11:42.079 "data_size": 63488 00:11:42.079 }, 00:11:42.079 { 00:11:42.079 "name": "BaseBdev4", 00:11:42.079 "uuid": "3e224098-db08-4fbc-bb5d-88142ee33505", 00:11:42.079 "is_configured": true, 00:11:42.079 "data_offset": 2048, 00:11:42.079 "data_size": 63488 00:11:42.079 } 00:11:42.079 ] 00:11:42.079 } 00:11:42.079 } 00:11:42.079 }' 00:11:42.079 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.338 BaseBdev2 00:11:42.338 BaseBdev3 00:11:42.338 BaseBdev4' 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.338 16:12:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.338 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.339 16:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.339 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.598 [2024-09-28 16:12:57.035328] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.598 [2024-09-28 16:12:57.035362] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.598 [2024-09-28 16:12:57.035445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.598 [2024-09-28 16:12:57.035527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.598 [2024-09-28 16:12:57.035541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71992 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71992 ']' 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71992 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71992 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.598 killing process with pid 71992 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71992' 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71992 00:11:42.598 [2024-09-28 16:12:57.087666] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.598 16:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71992 00:11:42.857 [2024-09-28 16:12:57.495602] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.233 16:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.233 00:11:44.233 real 0m11.631s 00:11:44.233 user 0m18.071s 00:11:44.233 sys 0m2.240s 00:11:44.233 16:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.233 16:12:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.233 ************************************ 00:11:44.233 END TEST raid_state_function_test_sb 00:11:44.233 ************************************ 00:11:44.233 16:12:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:44.233 16:12:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:44.233 16:12:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.233 16:12:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.233 ************************************ 00:11:44.233 START TEST raid_superblock_test 00:11:44.233 ************************************ 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72658 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72658 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72658 ']' 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.233 16:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.492 [2024-09-28 16:12:58.991933] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:44.492 [2024-09-28 16:12:58.992060] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72658 ] 00:11:44.492 [2024-09-28 16:12:59.158555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.751 [2024-09-28 16:12:59.406790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.008 [2024-09-28 16:12:59.635909] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.008 [2024-09-28 16:12:59.635943] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:45.271 
16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.271 malloc1 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.271 [2024-09-28 16:12:59.871613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.271 [2024-09-28 16:12:59.871683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.271 [2024-09-28 16:12:59.871707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.271 [2024-09-28 16:12:59.871720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.271 [2024-09-28 16:12:59.874082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.271 [2024-09-28 16:12:59.874119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.271 pt1 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.271 malloc2 00:11:45.271 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.272 [2024-09-28 16:12:59.938558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.272 [2024-09-28 16:12:59.938619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.272 [2024-09-28 16:12:59.938642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.272 [2024-09-28 16:12:59.938651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.272 [2024-09-28 16:12:59.941015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.272 [2024-09-28 16:12:59.941051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.272 
pt2 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.272 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.532 malloc3 00:11:45.532 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.532 16:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.532 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.532 16:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.532 [2024-09-28 16:12:59.999040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.532 [2024-09-28 16:12:59.999096] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.532 [2024-09-28 16:12:59.999116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:45.532 [2024-09-28 16:12:59.999126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.532 [2024-09-28 16:13:00.001489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.532 [2024-09-28 16:13:00.001523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.532 pt3 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.532 malloc4 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.532 [2024-09-28 16:13:00.059528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:45.532 [2024-09-28 16:13:00.059583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.532 [2024-09-28 16:13:00.059604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:45.532 [2024-09-28 16:13:00.059613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.532 [2024-09-28 16:13:00.061932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.532 [2024-09-28 16:13:00.061964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:45.532 pt4 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.532 [2024-09-28 16:13:00.071573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.532 [2024-09-28 
16:13:00.073639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.532 [2024-09-28 16:13:00.073707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.532 [2024-09-28 16:13:00.073771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:45.532 [2024-09-28 16:13:00.073980] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:45.532 [2024-09-28 16:13:00.074003] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.532 [2024-09-28 16:13:00.074296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:45.532 [2024-09-28 16:13:00.074473] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:45.532 [2024-09-28 16:13:00.074492] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:45.532 [2024-09-28 16:13:00.074641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.532 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.533 "name": "raid_bdev1", 00:11:45.533 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:45.533 "strip_size_kb": 64, 00:11:45.533 "state": "online", 00:11:45.533 "raid_level": "concat", 00:11:45.533 "superblock": true, 00:11:45.533 "num_base_bdevs": 4, 00:11:45.533 "num_base_bdevs_discovered": 4, 00:11:45.533 "num_base_bdevs_operational": 4, 00:11:45.533 "base_bdevs_list": [ 00:11:45.533 { 00:11:45.533 "name": "pt1", 00:11:45.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.533 "is_configured": true, 00:11:45.533 "data_offset": 2048, 00:11:45.533 "data_size": 63488 00:11:45.533 }, 00:11:45.533 { 00:11:45.533 "name": "pt2", 00:11:45.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.533 "is_configured": true, 00:11:45.533 "data_offset": 2048, 00:11:45.533 "data_size": 63488 00:11:45.533 }, 00:11:45.533 { 00:11:45.533 "name": "pt3", 00:11:45.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.533 "is_configured": true, 00:11:45.533 "data_offset": 2048, 00:11:45.533 
"data_size": 63488 00:11:45.533 }, 00:11:45.533 { 00:11:45.533 "name": "pt4", 00:11:45.533 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.533 "is_configured": true, 00:11:45.533 "data_offset": 2048, 00:11:45.533 "data_size": 63488 00:11:45.533 } 00:11:45.533 ] 00:11:45.533 }' 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.533 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.102 [2024-09-28 16:13:00.495159] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.102 "name": "raid_bdev1", 00:11:46.102 "aliases": [ 00:11:46.102 "83b560df-7e0d-4249-816e-c8e99b4b3366" 
00:11:46.102 ], 00:11:46.102 "product_name": "Raid Volume", 00:11:46.102 "block_size": 512, 00:11:46.102 "num_blocks": 253952, 00:11:46.102 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:46.102 "assigned_rate_limits": { 00:11:46.102 "rw_ios_per_sec": 0, 00:11:46.102 "rw_mbytes_per_sec": 0, 00:11:46.102 "r_mbytes_per_sec": 0, 00:11:46.102 "w_mbytes_per_sec": 0 00:11:46.102 }, 00:11:46.102 "claimed": false, 00:11:46.102 "zoned": false, 00:11:46.102 "supported_io_types": { 00:11:46.102 "read": true, 00:11:46.102 "write": true, 00:11:46.102 "unmap": true, 00:11:46.102 "flush": true, 00:11:46.102 "reset": true, 00:11:46.102 "nvme_admin": false, 00:11:46.102 "nvme_io": false, 00:11:46.102 "nvme_io_md": false, 00:11:46.102 "write_zeroes": true, 00:11:46.102 "zcopy": false, 00:11:46.102 "get_zone_info": false, 00:11:46.102 "zone_management": false, 00:11:46.102 "zone_append": false, 00:11:46.102 "compare": false, 00:11:46.102 "compare_and_write": false, 00:11:46.102 "abort": false, 00:11:46.102 "seek_hole": false, 00:11:46.102 "seek_data": false, 00:11:46.102 "copy": false, 00:11:46.102 "nvme_iov_md": false 00:11:46.102 }, 00:11:46.102 "memory_domains": [ 00:11:46.102 { 00:11:46.102 "dma_device_id": "system", 00:11:46.102 "dma_device_type": 1 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.102 "dma_device_type": 2 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "dma_device_id": "system", 00:11:46.102 "dma_device_type": 1 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.102 "dma_device_type": 2 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "dma_device_id": "system", 00:11:46.102 "dma_device_type": 1 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.102 "dma_device_type": 2 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "dma_device_id": "system", 00:11:46.102 "dma_device_type": 1 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:46.102 "dma_device_type": 2 00:11:46.102 } 00:11:46.102 ], 00:11:46.102 "driver_specific": { 00:11:46.102 "raid": { 00:11:46.102 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:46.102 "strip_size_kb": 64, 00:11:46.102 "state": "online", 00:11:46.102 "raid_level": "concat", 00:11:46.102 "superblock": true, 00:11:46.102 "num_base_bdevs": 4, 00:11:46.102 "num_base_bdevs_discovered": 4, 00:11:46.102 "num_base_bdevs_operational": 4, 00:11:46.102 "base_bdevs_list": [ 00:11:46.102 { 00:11:46.102 "name": "pt1", 00:11:46.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 2048, 00:11:46.102 "data_size": 63488 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "name": "pt2", 00:11:46.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 2048, 00:11:46.102 "data_size": 63488 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "name": "pt3", 00:11:46.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 2048, 00:11:46.102 "data_size": 63488 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "name": "pt4", 00:11:46.102 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 2048, 00:11:46.102 "data_size": 63488 00:11:46.102 } 00:11:46.102 ] 00:11:46.102 } 00:11:46.102 } 00:11:46.102 }' 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:46.102 pt2 00:11:46.102 pt3 00:11:46.102 pt4' 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.102 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.103 16:13:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:11:46.103 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 [2024-09-28 16:13:00.786585] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=83b560df-7e0d-4249-816e-c8e99b4b3366 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 83b560df-7e0d-4249-816e-c8e99b4b3366 ']' 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 [2024-09-28 16:13:00.834237] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.363 [2024-09-28 16:13:00.834266] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.363 [2024-09-28 16:13:00.834343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.363 [2024-09-28 16:13:00.834429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.363 [2024-09-28 16:13:00.834451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 [2024-09-28 16:13:00.993963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.363 [2024-09-28 16:13:00.996118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.363 [2024-09-28 16:13:00.996171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:46.363 [2024-09-28 16:13:00.996205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:46.363 [2024-09-28 16:13:00.996274] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:46.363 [2024-09-28 16:13:00.996318] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:46.363 [2024-09-28 16:13:00.996337] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:46.363 [2024-09-28 16:13:00.996355] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:46.363 [2024-09-28 16:13:00.996369] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.363 [2024-09-28 16:13:00.996380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:46.363 request: 00:11:46.363 { 00:11:46.363 "name": "raid_bdev1", 00:11:46.363 "raid_level": "concat", 00:11:46.363 "base_bdevs": [ 00:11:46.363 "malloc1", 00:11:46.363 "malloc2", 00:11:46.363 "malloc3", 00:11:46.363 "malloc4" 00:11:46.363 ], 00:11:46.363 "strip_size_kb": 64, 00:11:46.363 "superblock": false, 00:11:46.363 "method": "bdev_raid_create", 00:11:46.363 "req_id": 1 00:11:46.363 } 00:11:46.363 Got JSON-RPC error response 00:11:46.363 response: 00:11:46.363 { 00:11:46.363 "code": -17, 00:11:46.363 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.363 } 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.624 [2024-09-28 16:13:01.057831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.624 [2024-09-28 16:13:01.057879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.624 [2024-09-28 16:13:01.057894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:46.624 [2024-09-28 16:13:01.057905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.624 [2024-09-28 16:13:01.060486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.624 [2024-09-28 16:13:01.060523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.624 [2024-09-28 16:13:01.060593] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:46.624 [2024-09-28 16:13:01.060652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.624 pt1 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.624 "name": "raid_bdev1", 00:11:46.624 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:46.624 "strip_size_kb": 64, 00:11:46.624 "state": "configuring", 00:11:46.624 "raid_level": "concat", 00:11:46.624 "superblock": true, 00:11:46.624 "num_base_bdevs": 4, 00:11:46.624 "num_base_bdevs_discovered": 1, 00:11:46.624 "num_base_bdevs_operational": 4, 00:11:46.624 "base_bdevs_list": [ 00:11:46.624 { 00:11:46.624 "name": "pt1", 00:11:46.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.624 "is_configured": true, 00:11:46.624 "data_offset": 2048, 00:11:46.624 "data_size": 63488 00:11:46.624 }, 00:11:46.624 { 00:11:46.624 "name": null, 00:11:46.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.624 "is_configured": false, 00:11:46.624 "data_offset": 2048, 00:11:46.624 "data_size": 63488 00:11:46.624 }, 00:11:46.624 { 00:11:46.624 "name": null, 00:11:46.624 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.624 "is_configured": false, 00:11:46.624 "data_offset": 2048, 00:11:46.624 "data_size": 63488 00:11:46.624 }, 00:11:46.624 { 00:11:46.624 "name": null, 00:11:46.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.624 "is_configured": false, 00:11:46.624 "data_offset": 2048, 00:11:46.624 "data_size": 63488 00:11:46.624 } 00:11:46.624 ] 00:11:46.624 }' 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.624 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.885 [2024-09-28 16:13:01.501114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.885 [2024-09-28 16:13:01.501191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.885 [2024-09-28 16:13:01.501212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:46.885 [2024-09-28 16:13:01.501234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.885 [2024-09-28 16:13:01.501761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.885 [2024-09-28 16:13:01.501784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.885 [2024-09-28 16:13:01.501874] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.885 [2024-09-28 16:13:01.501900] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.885 pt2 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.885 [2024-09-28 16:13:01.513105] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.885 16:13:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.885 "name": "raid_bdev1", 00:11:46.885 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:46.885 "strip_size_kb": 64, 00:11:46.885 "state": "configuring", 00:11:46.885 "raid_level": "concat", 00:11:46.885 "superblock": true, 00:11:46.885 "num_base_bdevs": 4, 00:11:46.885 "num_base_bdevs_discovered": 1, 00:11:46.885 "num_base_bdevs_operational": 4, 00:11:46.885 "base_bdevs_list": [ 00:11:46.885 { 00:11:46.885 "name": "pt1", 00:11:46.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.885 "is_configured": true, 00:11:46.885 "data_offset": 2048, 00:11:46.885 "data_size": 63488 00:11:46.885 }, 00:11:46.885 { 00:11:46.885 "name": null, 00:11:46.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.885 "is_configured": false, 00:11:46.885 "data_offset": 0, 00:11:46.885 "data_size": 63488 00:11:46.885 }, 00:11:46.885 { 00:11:46.885 "name": null, 00:11:46.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.885 "is_configured": false, 00:11:46.885 "data_offset": 2048, 00:11:46.885 "data_size": 63488 00:11:46.885 }, 00:11:46.885 { 00:11:46.885 "name": null, 00:11:46.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.885 "is_configured": false, 00:11:46.885 "data_offset": 2048, 00:11:46.885 "data_size": 63488 00:11:46.885 } 00:11:46.885 ] 00:11:46.885 }' 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.885 16:13:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.454 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:47.454 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.454 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.454 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.454 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 [2024-09-28 16:13:01.936323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.454 [2024-09-28 16:13:01.936382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.454 [2024-09-28 16:13:01.936404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:47.454 [2024-09-28 16:13:01.936413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.454 [2024-09-28 16:13:01.936895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.454 [2024-09-28 16:13:01.936911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.454 [2024-09-28 16:13:01.936998] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.455 [2024-09-28 16:13:01.937028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.455 pt2 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 [2024-09-28 16:13:01.948300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.455 [2024-09-28 16:13:01.948348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.455 [2024-09-28 16:13:01.948374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:47.455 [2024-09-28 16:13:01.948384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.455 [2024-09-28 16:13:01.948753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.455 [2024-09-28 16:13:01.948772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.455 [2024-09-28 16:13:01.948833] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:47.455 [2024-09-28 16:13:01.948849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.455 pt3 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 [2024-09-28 16:13:01.960250] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.455 [2024-09-28 16:13:01.960293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.455 [2024-09-28 16:13:01.960311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:47.455 [2024-09-28 16:13:01.960318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.455 [2024-09-28 16:13:01.960689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.455 [2024-09-28 16:13:01.960702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.455 [2024-09-28 16:13:01.960761] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.455 [2024-09-28 16:13:01.960783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.455 [2024-09-28 16:13:01.960914] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.455 [2024-09-28 16:13:01.960923] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.455 [2024-09-28 16:13:01.961188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:47.455 [2024-09-28 16:13:01.961347] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.455 [2024-09-28 16:13:01.961365] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:47.455 [2024-09-28 16:13:01.961492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.455 pt4 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 16:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.455 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.455 "name": "raid_bdev1", 00:11:47.455 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:47.455 "strip_size_kb": 64, 00:11:47.455 "state": "online", 00:11:47.455 "raid_level": "concat", 00:11:47.455 
"superblock": true, 00:11:47.455 "num_base_bdevs": 4, 00:11:47.455 "num_base_bdevs_discovered": 4, 00:11:47.455 "num_base_bdevs_operational": 4, 00:11:47.455 "base_bdevs_list": [ 00:11:47.455 { 00:11:47.455 "name": "pt1", 00:11:47.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.455 "is_configured": true, 00:11:47.455 "data_offset": 2048, 00:11:47.455 "data_size": 63488 00:11:47.455 }, 00:11:47.455 { 00:11:47.455 "name": "pt2", 00:11:47.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.455 "is_configured": true, 00:11:47.455 "data_offset": 2048, 00:11:47.455 "data_size": 63488 00:11:47.455 }, 00:11:47.455 { 00:11:47.455 "name": "pt3", 00:11:47.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.455 "is_configured": true, 00:11:47.455 "data_offset": 2048, 00:11:47.455 "data_size": 63488 00:11:47.455 }, 00:11:47.455 { 00:11:47.455 "name": "pt4", 00:11:47.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.455 "is_configured": true, 00:11:47.455 "data_offset": 2048, 00:11:47.455 "data_size": 63488 00:11:47.455 } 00:11:47.455 ] 00:11:47.455 }' 00:11:47.455 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.455 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.025 16:13:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.025 [2024-09-28 16:13:02.475705] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.025 "name": "raid_bdev1", 00:11:48.025 "aliases": [ 00:11:48.025 "83b560df-7e0d-4249-816e-c8e99b4b3366" 00:11:48.025 ], 00:11:48.025 "product_name": "Raid Volume", 00:11:48.025 "block_size": 512, 00:11:48.025 "num_blocks": 253952, 00:11:48.025 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:48.025 "assigned_rate_limits": { 00:11:48.025 "rw_ios_per_sec": 0, 00:11:48.025 "rw_mbytes_per_sec": 0, 00:11:48.025 "r_mbytes_per_sec": 0, 00:11:48.025 "w_mbytes_per_sec": 0 00:11:48.025 }, 00:11:48.025 "claimed": false, 00:11:48.025 "zoned": false, 00:11:48.025 "supported_io_types": { 00:11:48.025 "read": true, 00:11:48.025 "write": true, 00:11:48.025 "unmap": true, 00:11:48.025 "flush": true, 00:11:48.025 "reset": true, 00:11:48.025 "nvme_admin": false, 00:11:48.025 "nvme_io": false, 00:11:48.025 "nvme_io_md": false, 00:11:48.025 "write_zeroes": true, 00:11:48.025 "zcopy": false, 00:11:48.025 "get_zone_info": false, 00:11:48.025 "zone_management": false, 00:11:48.025 "zone_append": false, 00:11:48.025 "compare": false, 00:11:48.025 "compare_and_write": false, 00:11:48.025 "abort": false, 00:11:48.025 "seek_hole": false, 00:11:48.025 "seek_data": false, 00:11:48.025 "copy": false, 00:11:48.025 "nvme_iov_md": false 00:11:48.025 }, 00:11:48.025 
"memory_domains": [ 00:11:48.025 { 00:11:48.025 "dma_device_id": "system", 00:11:48.025 "dma_device_type": 1 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.025 "dma_device_type": 2 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "dma_device_id": "system", 00:11:48.025 "dma_device_type": 1 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.025 "dma_device_type": 2 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "dma_device_id": "system", 00:11:48.025 "dma_device_type": 1 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.025 "dma_device_type": 2 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "dma_device_id": "system", 00:11:48.025 "dma_device_type": 1 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.025 "dma_device_type": 2 00:11:48.025 } 00:11:48.025 ], 00:11:48.025 "driver_specific": { 00:11:48.025 "raid": { 00:11:48.025 "uuid": "83b560df-7e0d-4249-816e-c8e99b4b3366", 00:11:48.025 "strip_size_kb": 64, 00:11:48.025 "state": "online", 00:11:48.025 "raid_level": "concat", 00:11:48.025 "superblock": true, 00:11:48.025 "num_base_bdevs": 4, 00:11:48.025 "num_base_bdevs_discovered": 4, 00:11:48.025 "num_base_bdevs_operational": 4, 00:11:48.025 "base_bdevs_list": [ 00:11:48.025 { 00:11:48.025 "name": "pt1", 00:11:48.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "name": "pt2", 00:11:48.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "name": "pt3", 00:11:48.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 
00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "name": "pt4", 00:11:48.025 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 00:11:48.025 } 00:11:48.025 ] 00:11:48.025 } 00:11:48.025 } 00:11:48.025 }' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.025 pt2 00:11:48.025 pt3 00:11:48.025 pt4' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.025 
16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.025 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:48.285 [2024-09-28 16:13:02.759161] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 83b560df-7e0d-4249-816e-c8e99b4b3366 '!=' 83b560df-7e0d-4249-816e-c8e99b4b3366 ']' 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72658 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72658 ']' 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72658 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72658 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.285 killing process with pid 72658 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72658' 00:11:48.285 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72658 00:11:48.285 [2024-09-28 16:13:02.836346] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.285 [2024-09-28 16:13:02.836436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.285 [2024-09-28 16:13:02.836512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.285 [2024-09-28 16:13:02.836523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, sta 16:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72658 00:11:48.285 te offline 00:11:48.854 [2024-09-28 16:13:03.264773] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.235 16:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:50.235 00:11:50.235 real 0m5.728s 00:11:50.235 user 0m7.842s 00:11:50.235 sys 0m1.116s 00:11:50.235 16:13:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.235 16:13:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.235 ************************************ 00:11:50.235 END TEST raid_superblock_test 
00:11:50.235 ************************************ 00:11:50.235 16:13:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:50.235 16:13:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:50.235 16:13:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.235 16:13:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.235 ************************************ 00:11:50.235 START TEST raid_read_error_test 00:11:50.235 ************************************ 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.e4EKm8d39i 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72923 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72923 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72923 ']' 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.235 16:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.235 [2024-09-28 16:13:04.809133] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:50.235 [2024-09-28 16:13:04.809265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72923 ] 00:11:50.505 [2024-09-28 16:13:04.975511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.833 [2024-09-28 16:13:05.222037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.833 [2024-09-28 16:13:05.452769] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.833 [2024-09-28 16:13:05.452803] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.121 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.122 BaseBdev1_malloc 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.122 true 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.122 [2024-09-28 16:13:05.700319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:51.122 [2024-09-28 16:13:05.700386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.122 [2024-09-28 16:13:05.700405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:51.122 [2024-09-28 16:13:05.700418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.122 [2024-09-28 16:13:05.702783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.122 [2024-09-28 16:13:05.702818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.122 BaseBdev1 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.122 BaseBdev2_malloc 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.122 true 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.122 [2024-09-28 16:13:05.784407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.122 [2024-09-28 16:13:05.784463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.122 [2024-09-28 16:13:05.784479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.122 [2024-09-28 16:13:05.784491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.122 [2024-09-28 16:13:05.786810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.122 [2024-09-28 16:13:05.786844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.122 BaseBdev2 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.122 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 BaseBdev3_malloc 00:11:51.383 16:13:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 true 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 [2024-09-28 16:13:05.858152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:51.383 [2024-09-28 16:13:05.858206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.383 [2024-09-28 16:13:05.858236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:51.383 [2024-09-28 16:13:05.858249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.383 [2024-09-28 16:13:05.860669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.383 [2024-09-28 16:13:05.860705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:51.383 BaseBdev3 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 BaseBdev4_malloc 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 true 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 [2024-09-28 16:13:05.931810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:51.383 [2024-09-28 16:13:05.931865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.383 [2024-09-28 16:13:05.931884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:51.383 [2024-09-28 16:13:05.931895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.383 [2024-09-28 16:13:05.934281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.383 [2024-09-28 16:13:05.934317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:51.383 BaseBdev4 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 [2024-09-28 16:13:05.943863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.383 [2024-09-28 16:13:05.945878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.383 [2024-09-28 16:13:05.945956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.383 [2024-09-28 16:13:05.946013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.383 [2024-09-28 16:13:05.946246] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:51.383 [2024-09-28 16:13:05.946266] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:51.383 [2024-09-28 16:13:05.946509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:51.383 [2024-09-28 16:13:05.946675] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:51.383 [2024-09-28 16:13:05.946688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:51.383 [2024-09-28 16:13:05.946839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:51.383 16:13:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 16:13:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 16:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.383 "name": "raid_bdev1", 00:11:51.383 "uuid": "c5e638fe-23a9-46a6-a4d4-f3394e583b89", 00:11:51.383 "strip_size_kb": 64, 00:11:51.383 "state": "online", 00:11:51.383 "raid_level": "concat", 00:11:51.383 "superblock": true, 00:11:51.383 "num_base_bdevs": 4, 00:11:51.383 "num_base_bdevs_discovered": 4, 00:11:51.383 "num_base_bdevs_operational": 4, 00:11:51.383 "base_bdevs_list": [ 
00:11:51.383 { 00:11:51.383 "name": "BaseBdev1", 00:11:51.383 "uuid": "041a2203-f7d7-5b6e-85f2-3f7258102492", 00:11:51.383 "is_configured": true, 00:11:51.383 "data_offset": 2048, 00:11:51.383 "data_size": 63488 00:11:51.383 }, 00:11:51.383 { 00:11:51.383 "name": "BaseBdev2", 00:11:51.383 "uuid": "cd93483e-75a8-5355-a825-20e2d898572b", 00:11:51.383 "is_configured": true, 00:11:51.383 "data_offset": 2048, 00:11:51.383 "data_size": 63488 00:11:51.383 }, 00:11:51.383 { 00:11:51.383 "name": "BaseBdev3", 00:11:51.383 "uuid": "297cd6e7-2dda-52ba-abfd-bee788f09c6c", 00:11:51.383 "is_configured": true, 00:11:51.383 "data_offset": 2048, 00:11:51.383 "data_size": 63488 00:11:51.383 }, 00:11:51.383 { 00:11:51.383 "name": "BaseBdev4", 00:11:51.383 "uuid": "861f3fb7-10f7-5851-860a-529c6dbdb5d7", 00:11:51.383 "is_configured": true, 00:11:51.383 "data_offset": 2048, 00:11:51.383 "data_size": 63488 00:11:51.383 } 00:11:51.383 ] 00:11:51.383 }' 00:11:51.383 16:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.383 16:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.953 16:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.953 16:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.953 [2024-09-28 16:13:06.504387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.892 16:13:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.892 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.893 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.893 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.893 16:13:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.893 "name": "raid_bdev1", 00:11:52.893 "uuid": "c5e638fe-23a9-46a6-a4d4-f3394e583b89", 00:11:52.893 "strip_size_kb": 64, 00:11:52.893 "state": "online", 00:11:52.893 "raid_level": "concat", 00:11:52.893 "superblock": true, 00:11:52.893 "num_base_bdevs": 4, 00:11:52.893 "num_base_bdevs_discovered": 4, 00:11:52.893 "num_base_bdevs_operational": 4, 00:11:52.893 "base_bdevs_list": [ 00:11:52.893 { 00:11:52.893 "name": "BaseBdev1", 00:11:52.893 "uuid": "041a2203-f7d7-5b6e-85f2-3f7258102492", 00:11:52.893 "is_configured": true, 00:11:52.893 "data_offset": 2048, 00:11:52.893 "data_size": 63488 00:11:52.893 }, 00:11:52.893 { 00:11:52.893 "name": "BaseBdev2", 00:11:52.893 "uuid": "cd93483e-75a8-5355-a825-20e2d898572b", 00:11:52.893 "is_configured": true, 00:11:52.893 "data_offset": 2048, 00:11:52.893 "data_size": 63488 00:11:52.893 }, 00:11:52.893 { 00:11:52.893 "name": "BaseBdev3", 00:11:52.893 "uuid": "297cd6e7-2dda-52ba-abfd-bee788f09c6c", 00:11:52.893 "is_configured": true, 00:11:52.893 "data_offset": 2048, 00:11:52.893 "data_size": 63488 00:11:52.893 }, 00:11:52.893 { 00:11:52.893 "name": "BaseBdev4", 00:11:52.893 "uuid": "861f3fb7-10f7-5851-860a-529c6dbdb5d7", 00:11:52.893 "is_configured": true, 00:11:52.893 "data_offset": 2048, 00:11:52.893 "data_size": 63488 00:11:52.893 } 00:11:52.893 ] 00:11:52.893 }' 00:11:52.893 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.893 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.464 [2024-09-28 16:13:07.857190] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.464 [2024-09-28 16:13:07.857243] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.464 [2024-09-28 16:13:07.859873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.464 [2024-09-28 16:13:07.859940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.464 [2024-09-28 16:13:07.859988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.464 [2024-09-28 16:13:07.860002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:53.464 { 00:11:53.464 "results": [ 00:11:53.464 { 00:11:53.464 "job": "raid_bdev1", 00:11:53.464 "core_mask": "0x1", 00:11:53.464 "workload": "randrw", 00:11:53.464 "percentage": 50, 00:11:53.464 "status": "finished", 00:11:53.464 "queue_depth": 1, 00:11:53.464 "io_size": 131072, 00:11:53.464 "runtime": 1.353366, 00:11:53.464 "iops": 14154.338146517646, 00:11:53.464 "mibps": 1769.2922683147058, 00:11:53.464 "io_failed": 1, 00:11:53.464 "io_timeout": 0, 00:11:53.464 "avg_latency_us": 99.58039851350128, 00:11:53.464 "min_latency_us": 24.482096069868994, 00:11:53.464 "max_latency_us": 1395.1441048034935 00:11:53.464 } 00:11:53.464 ], 00:11:53.464 "core_count": 1 00:11:53.464 } 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72923 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72923 ']' 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72923 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72923 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72923' 00:11:53.464 killing process with pid 72923 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72923 00:11:53.464 [2024-09-28 16:13:07.901638] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.464 16:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72923 00:11:53.724 [2024-09-28 16:13:08.242495] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.e4EKm8d39i 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:55.105 00:11:55.105 real 0m4.948s 00:11:55.105 user 0m5.653s 00:11:55.105 sys 0m0.736s 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:55.105 16:13:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.105 ************************************ 00:11:55.105 END TEST raid_read_error_test 00:11:55.105 ************************************ 00:11:55.105 16:13:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:55.105 16:13:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:55.105 16:13:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.105 16:13:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.105 ************************************ 00:11:55.105 START TEST raid_write_error_test 00:11:55.105 ************************************ 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.105 16:13:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kAoAVdBKeS 00:11:55.106 16:13:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73074 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73074 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73074 ']' 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.106 16:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.366 [2024-09-28 16:13:09.836304] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:55.366 [2024-09-28 16:13:09.836443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73074 ] 00:11:55.366 [2024-09-28 16:13:10.004769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.625 [2024-09-28 16:13:10.247339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.885 [2024-09-28 16:13:10.480367] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.885 [2024-09-28 16:13:10.480400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 BaseBdev1_malloc 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 true 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 [2024-09-28 16:13:10.717497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:56.146 [2024-09-28 16:13:10.717560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.146 [2024-09-28 16:13:10.717578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:56.146 [2024-09-28 16:13:10.717590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.146 [2024-09-28 16:13:10.719942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.146 [2024-09-28 16:13:10.719978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.146 BaseBdev1 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 BaseBdev2_malloc 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:56.146 16:13:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 true 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 [2024-09-28 16:13:10.800109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:56.146 [2024-09-28 16:13:10.800166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.146 [2024-09-28 16:13:10.800183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:56.146 [2024-09-28 16:13:10.800195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.146 [2024-09-28 16:13:10.802561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.146 [2024-09-28 16:13:10.802597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.146 BaseBdev2 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.146 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:56.406 BaseBdev3_malloc 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.406 true 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.406 [2024-09-28 16:13:10.873140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:56.406 [2024-09-28 16:13:10.873192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.406 [2024-09-28 16:13:10.873208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:56.406 [2024-09-28 16:13:10.873219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.406 [2024-09-28 16:13:10.875589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.406 [2024-09-28 16:13:10.875626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:56.406 BaseBdev3 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.406 BaseBdev4_malloc 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.406 true 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.406 [2024-09-28 16:13:10.946151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:56.406 [2024-09-28 16:13:10.946201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.406 [2024-09-28 16:13:10.946217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:56.406 [2024-09-28 16:13:10.946240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.406 [2024-09-28 16:13:10.948556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.406 [2024-09-28 16:13:10.948590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:56.406 BaseBdev4 
00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.406 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.406 [2024-09-28 16:13:10.958216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.406 [2024-09-28 16:13:10.960273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.406 [2024-09-28 16:13:10.960349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.406 [2024-09-28 16:13:10.960420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.406 [2024-09-28 16:13:10.960649] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:56.406 [2024-09-28 16:13:10.960671] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:56.407 [2024-09-28 16:13:10.960906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:56.407 [2024-09-28 16:13:10.961079] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:56.407 [2024-09-28 16:13:10.961091] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:56.407 [2024-09-28 16:13:10.961253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.407 16:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.407 16:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.407 "name": "raid_bdev1", 00:11:56.407 "uuid": "81d1b4a4-e942-4f22-9932-51c30a8abecb", 00:11:56.407 "strip_size_kb": 64, 00:11:56.407 "state": "online", 00:11:56.407 "raid_level": "concat", 00:11:56.407 "superblock": true, 00:11:56.407 "num_base_bdevs": 4, 00:11:56.407 "num_base_bdevs_discovered": 4, 00:11:56.407 
"num_base_bdevs_operational": 4, 00:11:56.407 "base_bdevs_list": [ 00:11:56.407 { 00:11:56.407 "name": "BaseBdev1", 00:11:56.407 "uuid": "2349153e-b7d9-55af-af10-e7a97c149adb", 00:11:56.407 "is_configured": true, 00:11:56.407 "data_offset": 2048, 00:11:56.407 "data_size": 63488 00:11:56.407 }, 00:11:56.407 { 00:11:56.407 "name": "BaseBdev2", 00:11:56.407 "uuid": "1ef14de1-4c24-5f0b-9f83-9ad72faea7b7", 00:11:56.407 "is_configured": true, 00:11:56.407 "data_offset": 2048, 00:11:56.407 "data_size": 63488 00:11:56.407 }, 00:11:56.407 { 00:11:56.407 "name": "BaseBdev3", 00:11:56.407 "uuid": "139c4e8e-777b-55a7-bff2-d1ff0625dd7d", 00:11:56.407 "is_configured": true, 00:11:56.407 "data_offset": 2048, 00:11:56.407 "data_size": 63488 00:11:56.407 }, 00:11:56.407 { 00:11:56.407 "name": "BaseBdev4", 00:11:56.407 "uuid": "62f3d647-9f40-52ef-a1c5-2ef9254e6313", 00:11:56.407 "is_configured": true, 00:11:56.407 "data_offset": 2048, 00:11:56.407 "data_size": 63488 00:11:56.407 } 00:11:56.407 ] 00:11:56.407 }' 00:11:56.407 16:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.407 16:13:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.975 16:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.975 16:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:56.975 [2024-09-28 16:13:11.430811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.913 16:13:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.913 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.913 "name": "raid_bdev1", 00:11:57.913 "uuid": "81d1b4a4-e942-4f22-9932-51c30a8abecb", 00:11:57.913 "strip_size_kb": 64, 00:11:57.913 "state": "online", 00:11:57.913 "raid_level": "concat", 00:11:57.914 "superblock": true, 00:11:57.914 "num_base_bdevs": 4, 00:11:57.914 "num_base_bdevs_discovered": 4, 00:11:57.914 "num_base_bdevs_operational": 4, 00:11:57.914 "base_bdevs_list": [ 00:11:57.914 { 00:11:57.914 "name": "BaseBdev1", 00:11:57.914 "uuid": "2349153e-b7d9-55af-af10-e7a97c149adb", 00:11:57.914 "is_configured": true, 00:11:57.914 "data_offset": 2048, 00:11:57.914 "data_size": 63488 00:11:57.914 }, 00:11:57.914 { 00:11:57.914 "name": "BaseBdev2", 00:11:57.914 "uuid": "1ef14de1-4c24-5f0b-9f83-9ad72faea7b7", 00:11:57.914 "is_configured": true, 00:11:57.914 "data_offset": 2048, 00:11:57.914 "data_size": 63488 00:11:57.914 }, 00:11:57.914 { 00:11:57.914 "name": "BaseBdev3", 00:11:57.914 "uuid": "139c4e8e-777b-55a7-bff2-d1ff0625dd7d", 00:11:57.914 "is_configured": true, 00:11:57.914 "data_offset": 2048, 00:11:57.914 "data_size": 63488 00:11:57.914 }, 00:11:57.914 { 00:11:57.914 "name": "BaseBdev4", 00:11:57.914 "uuid": "62f3d647-9f40-52ef-a1c5-2ef9254e6313", 00:11:57.914 "is_configured": true, 00:11:57.914 "data_offset": 2048, 00:11:57.914 "data_size": 63488 00:11:57.914 } 00:11:57.914 ] 00:11:57.914 }' 00:11:57.914 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.914 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.173 [2024-09-28 16:13:12.803501] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.173 [2024-09-28 16:13:12.803546] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.173 [2024-09-28 16:13:12.806128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.173 [2024-09-28 16:13:12.806199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.173 [2024-09-28 16:13:12.806257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.173 [2024-09-28 16:13:12.806270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:58.173 { 00:11:58.173 "results": [ 00:11:58.173 { 00:11:58.173 "job": "raid_bdev1", 00:11:58.173 "core_mask": "0x1", 00:11:58.173 "workload": "randrw", 00:11:58.173 "percentage": 50, 00:11:58.173 "status": "finished", 00:11:58.173 "queue_depth": 1, 00:11:58.173 "io_size": 131072, 00:11:58.173 "runtime": 1.373297, 00:11:58.173 "iops": 14187.753996404273, 00:11:58.173 "mibps": 1773.4692495505342, 00:11:58.173 "io_failed": 1, 00:11:58.173 "io_timeout": 0, 00:11:58.173 "avg_latency_us": 99.44019659059202, 00:11:58.173 "min_latency_us": 24.370305676855896, 00:11:58.173 "max_latency_us": 1438.071615720524 00:11:58.173 } 00:11:58.173 ], 00:11:58.173 "core_count": 1 00:11:58.173 } 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73074 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73074 ']' 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73074 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73074 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:58.173 killing process with pid 73074 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73074' 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73074 00:11:58.173 [2024-09-28 16:13:12.852170] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.173 16:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73074 00:11:58.743 [2024-09-28 16:13:13.189581] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kAoAVdBKeS 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:00.125 00:12:00.125 real 0m4.866s 00:12:00.125 user 0m5.472s 
00:12:00.125 sys 0m0.728s 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.125 16:13:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 ************************************ 00:12:00.125 END TEST raid_write_error_test 00:12:00.125 ************************************ 00:12:00.125 16:13:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:00.125 16:13:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:00.125 16:13:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:00.125 16:13:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.125 16:13:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 ************************************ 00:12:00.125 START TEST raid_state_function_test 00:12:00.125 ************************************ 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.125 
16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:00.125 16:13:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73222 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.125 Process raid pid: 73222 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73222' 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73222 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73222 ']' 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.125 16:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 [2024-09-28 16:13:14.764640] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:00.125 [2024-09-28 16:13:14.764775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.385 [2024-09-28 16:13:14.932975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.644 [2024-09-28 16:13:15.175107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.903 [2024-09-28 16:13:15.398528] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.903 [2024-09-28 16:13:15.398574] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.903 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.903 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:00.903 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.903 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.903 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.903 [2024-09-28 16:13:15.585487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.903 [2024-09-28 16:13:15.585542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.903 [2024-09-28 16:13:15.585552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.903 [2024-09-28 16:13:15.585562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.903 [2024-09-28 16:13:15.585568] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:00.903 [2024-09-28 16:13:15.585577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.903 [2024-09-28 16:13:15.585583] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.903 [2024-09-28 16:13:15.585592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.163 "name": "Existed_Raid", 00:12:01.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.163 "strip_size_kb": 0, 00:12:01.163 "state": "configuring", 00:12:01.163 "raid_level": "raid1", 00:12:01.163 "superblock": false, 00:12:01.163 "num_base_bdevs": 4, 00:12:01.163 "num_base_bdevs_discovered": 0, 00:12:01.163 "num_base_bdevs_operational": 4, 00:12:01.163 "base_bdevs_list": [ 00:12:01.163 { 00:12:01.163 "name": "BaseBdev1", 00:12:01.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.163 "is_configured": false, 00:12:01.163 "data_offset": 0, 00:12:01.163 "data_size": 0 00:12:01.163 }, 00:12:01.163 { 00:12:01.163 "name": "BaseBdev2", 00:12:01.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.163 "is_configured": false, 00:12:01.163 "data_offset": 0, 00:12:01.163 "data_size": 0 00:12:01.163 }, 00:12:01.163 { 00:12:01.163 "name": "BaseBdev3", 00:12:01.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.163 "is_configured": false, 00:12:01.163 "data_offset": 0, 00:12:01.163 "data_size": 0 00:12:01.163 }, 00:12:01.163 { 00:12:01.163 "name": "BaseBdev4", 00:12:01.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.163 "is_configured": false, 00:12:01.163 "data_offset": 0, 00:12:01.163 "data_size": 0 00:12:01.163 } 00:12:01.163 ] 00:12:01.163 }' 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.163 16:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 [2024-09-28 16:13:16.036619] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.422 [2024-09-28 16:13:16.036668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 [2024-09-28 16:13:16.048620] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.422 [2024-09-28 16:13:16.048659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.422 [2024-09-28 16:13:16.048668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.422 [2024-09-28 16:13:16.048677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.422 [2024-09-28 16:13:16.048683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.422 [2024-09-28 16:13:16.048692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.422 [2024-09-28 16:13:16.048698] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.422 [2024-09-28 16:13:16.048707] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.422 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.681 [2024-09-28 16:13:16.114669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.681 BaseBdev1 00:12:01.681 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.681 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:01.681 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:01.681 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:01.681 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 [ 00:12:01.682 { 00:12:01.682 "name": "BaseBdev1", 00:12:01.682 "aliases": [ 00:12:01.682 "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b" 00:12:01.682 ], 00:12:01.682 "product_name": "Malloc disk", 00:12:01.682 "block_size": 512, 00:12:01.682 "num_blocks": 65536, 00:12:01.682 "uuid": "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b", 00:12:01.682 "assigned_rate_limits": { 00:12:01.682 "rw_ios_per_sec": 0, 00:12:01.682 "rw_mbytes_per_sec": 0, 00:12:01.682 "r_mbytes_per_sec": 0, 00:12:01.682 "w_mbytes_per_sec": 0 00:12:01.682 }, 00:12:01.682 "claimed": true, 00:12:01.682 "claim_type": "exclusive_write", 00:12:01.682 "zoned": false, 00:12:01.682 "supported_io_types": { 00:12:01.682 "read": true, 00:12:01.682 "write": true, 00:12:01.682 "unmap": true, 00:12:01.682 "flush": true, 00:12:01.682 "reset": true, 00:12:01.682 "nvme_admin": false, 00:12:01.682 "nvme_io": false, 00:12:01.682 "nvme_io_md": false, 00:12:01.682 "write_zeroes": true, 00:12:01.682 "zcopy": true, 00:12:01.682 "get_zone_info": false, 00:12:01.682 "zone_management": false, 00:12:01.682 "zone_append": false, 00:12:01.682 "compare": false, 00:12:01.682 "compare_and_write": false, 00:12:01.682 "abort": true, 00:12:01.682 "seek_hole": false, 00:12:01.682 "seek_data": false, 00:12:01.682 "copy": true, 00:12:01.682 "nvme_iov_md": false 00:12:01.682 }, 00:12:01.682 "memory_domains": [ 00:12:01.682 { 00:12:01.682 "dma_device_id": "system", 00:12:01.682 "dma_device_type": 1 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.682 "dma_device_type": 2 00:12:01.682 } 00:12:01.682 ], 00:12:01.682 "driver_specific": {} 00:12:01.682 } 00:12:01.682 ] 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.682 "name": "Existed_Raid", 
00:12:01.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.682 "strip_size_kb": 0, 00:12:01.682 "state": "configuring", 00:12:01.682 "raid_level": "raid1", 00:12:01.682 "superblock": false, 00:12:01.682 "num_base_bdevs": 4, 00:12:01.682 "num_base_bdevs_discovered": 1, 00:12:01.682 "num_base_bdevs_operational": 4, 00:12:01.682 "base_bdevs_list": [ 00:12:01.682 { 00:12:01.682 "name": "BaseBdev1", 00:12:01.682 "uuid": "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b", 00:12:01.682 "is_configured": true, 00:12:01.682 "data_offset": 0, 00:12:01.682 "data_size": 65536 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "name": "BaseBdev2", 00:12:01.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.682 "is_configured": false, 00:12:01.682 "data_offset": 0, 00:12:01.682 "data_size": 0 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "name": "BaseBdev3", 00:12:01.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.682 "is_configured": false, 00:12:01.682 "data_offset": 0, 00:12:01.682 "data_size": 0 00:12:01.682 }, 00:12:01.682 { 00:12:01.682 "name": "BaseBdev4", 00:12:01.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.682 "is_configured": false, 00:12:01.682 "data_offset": 0, 00:12:01.682 "data_size": 0 00:12:01.682 } 00:12:01.682 ] 00:12:01.682 }' 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.682 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 [2024-09-28 16:13:16.549919] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.942 [2024-09-28 16:13:16.549980] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 [2024-09-28 16:13:16.561957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.942 [2024-09-28 16:13:16.564120] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.942 [2024-09-28 16:13:16.564163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.942 [2024-09-28 16:13:16.564174] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.942 [2024-09-28 16:13:16.564184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.942 [2024-09-28 16:13:16.564191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.942 [2024-09-28 16:13:16.564200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.942 
16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.942 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.942 "name": "Existed_Raid", 00:12:01.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.942 "strip_size_kb": 0, 00:12:01.942 "state": "configuring", 00:12:01.942 "raid_level": "raid1", 00:12:01.942 "superblock": false, 00:12:01.942 "num_base_bdevs": 4, 00:12:01.942 "num_base_bdevs_discovered": 1, 
00:12:01.942 "num_base_bdevs_operational": 4, 00:12:01.942 "base_bdevs_list": [ 00:12:01.942 { 00:12:01.942 "name": "BaseBdev1", 00:12:01.942 "uuid": "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b", 00:12:01.942 "is_configured": true, 00:12:01.942 "data_offset": 0, 00:12:01.942 "data_size": 65536 00:12:01.942 }, 00:12:01.942 { 00:12:01.942 "name": "BaseBdev2", 00:12:01.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.942 "is_configured": false, 00:12:01.942 "data_offset": 0, 00:12:01.942 "data_size": 0 00:12:01.942 }, 00:12:01.942 { 00:12:01.943 "name": "BaseBdev3", 00:12:01.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.943 "is_configured": false, 00:12:01.943 "data_offset": 0, 00:12:01.943 "data_size": 0 00:12:01.943 }, 00:12:01.943 { 00:12:01.943 "name": "BaseBdev4", 00:12:01.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.943 "is_configured": false, 00:12:01.943 "data_offset": 0, 00:12:01.943 "data_size": 0 00:12:01.943 } 00:12:01.943 ] 00:12:01.943 }' 00:12:01.943 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.943 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.512 16:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.512 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.512 16:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.512 [2024-09-28 16:13:17.000933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.512 BaseBdev2 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.512 [ 00:12:02.512 { 00:12:02.512 "name": "BaseBdev2", 00:12:02.512 "aliases": [ 00:12:02.512 "eead3859-a164-4fbd-8a36-74596cf878f6" 00:12:02.512 ], 00:12:02.512 "product_name": "Malloc disk", 00:12:02.512 "block_size": 512, 00:12:02.512 "num_blocks": 65536, 00:12:02.512 "uuid": "eead3859-a164-4fbd-8a36-74596cf878f6", 00:12:02.512 "assigned_rate_limits": { 00:12:02.512 "rw_ios_per_sec": 0, 00:12:02.512 "rw_mbytes_per_sec": 0, 00:12:02.512 "r_mbytes_per_sec": 0, 00:12:02.512 "w_mbytes_per_sec": 0 00:12:02.512 }, 00:12:02.512 "claimed": true, 00:12:02.512 "claim_type": "exclusive_write", 00:12:02.512 "zoned": false, 00:12:02.512 "supported_io_types": { 00:12:02.512 "read": true, 
00:12:02.512 "write": true, 00:12:02.512 "unmap": true, 00:12:02.512 "flush": true, 00:12:02.512 "reset": true, 00:12:02.512 "nvme_admin": false, 00:12:02.512 "nvme_io": false, 00:12:02.512 "nvme_io_md": false, 00:12:02.512 "write_zeroes": true, 00:12:02.512 "zcopy": true, 00:12:02.512 "get_zone_info": false, 00:12:02.512 "zone_management": false, 00:12:02.512 "zone_append": false, 00:12:02.512 "compare": false, 00:12:02.512 "compare_and_write": false, 00:12:02.512 "abort": true, 00:12:02.512 "seek_hole": false, 00:12:02.512 "seek_data": false, 00:12:02.512 "copy": true, 00:12:02.512 "nvme_iov_md": false 00:12:02.512 }, 00:12:02.512 "memory_domains": [ 00:12:02.512 { 00:12:02.512 "dma_device_id": "system", 00:12:02.512 "dma_device_type": 1 00:12:02.512 }, 00:12:02.512 { 00:12:02.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.512 "dma_device_type": 2 00:12:02.512 } 00:12:02.512 ], 00:12:02.512 "driver_specific": {} 00:12:02.512 } 00:12:02.512 ] 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:02.512 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.513 "name": "Existed_Raid", 00:12:02.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.513 "strip_size_kb": 0, 00:12:02.513 "state": "configuring", 00:12:02.513 "raid_level": "raid1", 00:12:02.513 "superblock": false, 00:12:02.513 "num_base_bdevs": 4, 00:12:02.513 "num_base_bdevs_discovered": 2, 00:12:02.513 "num_base_bdevs_operational": 4, 00:12:02.513 "base_bdevs_list": [ 00:12:02.513 { 00:12:02.513 "name": "BaseBdev1", 00:12:02.513 "uuid": "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b", 00:12:02.513 "is_configured": true, 00:12:02.513 "data_offset": 0, 00:12:02.513 "data_size": 65536 00:12:02.513 }, 00:12:02.513 { 00:12:02.513 "name": "BaseBdev2", 00:12:02.513 "uuid": "eead3859-a164-4fbd-8a36-74596cf878f6", 00:12:02.513 "is_configured": true, 
00:12:02.513 "data_offset": 0, 00:12:02.513 "data_size": 65536 00:12:02.513 }, 00:12:02.513 { 00:12:02.513 "name": "BaseBdev3", 00:12:02.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.513 "is_configured": false, 00:12:02.513 "data_offset": 0, 00:12:02.513 "data_size": 0 00:12:02.513 }, 00:12:02.513 { 00:12:02.513 "name": "BaseBdev4", 00:12:02.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.513 "is_configured": false, 00:12:02.513 "data_offset": 0, 00:12:02.513 "data_size": 0 00:12:02.513 } 00:12:02.513 ] 00:12:02.513 }' 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.513 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.081 [2024-09-28 16:13:17.516110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.081 BaseBdev3 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.081 [ 00:12:03.081 { 00:12:03.081 "name": "BaseBdev3", 00:12:03.081 "aliases": [ 00:12:03.081 "e902e3d6-4365-40e1-8466-8fd8f28afe3d" 00:12:03.081 ], 00:12:03.081 "product_name": "Malloc disk", 00:12:03.081 "block_size": 512, 00:12:03.081 "num_blocks": 65536, 00:12:03.081 "uuid": "e902e3d6-4365-40e1-8466-8fd8f28afe3d", 00:12:03.081 "assigned_rate_limits": { 00:12:03.081 "rw_ios_per_sec": 0, 00:12:03.081 "rw_mbytes_per_sec": 0, 00:12:03.081 "r_mbytes_per_sec": 0, 00:12:03.081 "w_mbytes_per_sec": 0 00:12:03.081 }, 00:12:03.081 "claimed": true, 00:12:03.081 "claim_type": "exclusive_write", 00:12:03.081 "zoned": false, 00:12:03.081 "supported_io_types": { 00:12:03.081 "read": true, 00:12:03.081 "write": true, 00:12:03.081 "unmap": true, 00:12:03.081 "flush": true, 00:12:03.081 "reset": true, 00:12:03.081 "nvme_admin": false, 00:12:03.081 "nvme_io": false, 00:12:03.081 "nvme_io_md": false, 00:12:03.081 "write_zeroes": true, 00:12:03.081 "zcopy": true, 00:12:03.081 "get_zone_info": false, 00:12:03.081 "zone_management": false, 00:12:03.081 "zone_append": false, 00:12:03.081 "compare": false, 00:12:03.081 "compare_and_write": false, 
00:12:03.081 "abort": true, 00:12:03.081 "seek_hole": false, 00:12:03.081 "seek_data": false, 00:12:03.081 "copy": true, 00:12:03.081 "nvme_iov_md": false 00:12:03.081 }, 00:12:03.081 "memory_domains": [ 00:12:03.081 { 00:12:03.081 "dma_device_id": "system", 00:12:03.081 "dma_device_type": 1 00:12:03.081 }, 00:12:03.081 { 00:12:03.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.081 "dma_device_type": 2 00:12:03.081 } 00:12:03.081 ], 00:12:03.081 "driver_specific": {} 00:12:03.081 } 00:12:03.081 ] 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.081 "name": "Existed_Raid", 00:12:03.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.081 "strip_size_kb": 0, 00:12:03.081 "state": "configuring", 00:12:03.081 "raid_level": "raid1", 00:12:03.081 "superblock": false, 00:12:03.081 "num_base_bdevs": 4, 00:12:03.081 "num_base_bdevs_discovered": 3, 00:12:03.081 "num_base_bdevs_operational": 4, 00:12:03.081 "base_bdevs_list": [ 00:12:03.081 { 00:12:03.081 "name": "BaseBdev1", 00:12:03.081 "uuid": "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b", 00:12:03.081 "is_configured": true, 00:12:03.081 "data_offset": 0, 00:12:03.081 "data_size": 65536 00:12:03.081 }, 00:12:03.081 { 00:12:03.081 "name": "BaseBdev2", 00:12:03.081 "uuid": "eead3859-a164-4fbd-8a36-74596cf878f6", 00:12:03.081 "is_configured": true, 00:12:03.081 "data_offset": 0, 00:12:03.081 "data_size": 65536 00:12:03.081 }, 00:12:03.081 { 00:12:03.081 "name": "BaseBdev3", 00:12:03.081 "uuid": "e902e3d6-4365-40e1-8466-8fd8f28afe3d", 00:12:03.081 "is_configured": true, 00:12:03.081 "data_offset": 0, 00:12:03.081 "data_size": 65536 00:12:03.081 }, 00:12:03.081 { 00:12:03.081 "name": "BaseBdev4", 00:12:03.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.081 "is_configured": false, 
00:12:03.081 "data_offset": 0, 00:12:03.081 "data_size": 0 00:12:03.081 } 00:12:03.081 ] 00:12:03.081 }' 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.081 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.340 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:03.340 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.340 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.340 [2024-09-28 16:13:17.998863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.340 [2024-09-28 16:13:17.998988] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.340 [2024-09-28 16:13:17.999007] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.340 [2024-09-28 16:13:17.999379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.340 [2024-09-28 16:13:17.999586] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.340 [2024-09-28 16:13:17.999600] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:03.340 [2024-09-28 16:13:17.999883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.340 BaseBdev4 00:12:03.340 16:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.340 16:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.340 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.340 [ 00:12:03.340 { 00:12:03.340 "name": "BaseBdev4", 00:12:03.598 "aliases": [ 00:12:03.598 "c1d3caff-10b5-40f2-a18f-078c20d24b45" 00:12:03.598 ], 00:12:03.598 "product_name": "Malloc disk", 00:12:03.598 "block_size": 512, 00:12:03.598 "num_blocks": 65536, 00:12:03.598 "uuid": "c1d3caff-10b5-40f2-a18f-078c20d24b45", 00:12:03.598 "assigned_rate_limits": { 00:12:03.598 "rw_ios_per_sec": 0, 00:12:03.598 "rw_mbytes_per_sec": 0, 00:12:03.598 "r_mbytes_per_sec": 0, 00:12:03.598 "w_mbytes_per_sec": 0 00:12:03.598 }, 00:12:03.598 "claimed": true, 00:12:03.598 "claim_type": "exclusive_write", 00:12:03.598 "zoned": false, 00:12:03.598 "supported_io_types": { 00:12:03.598 "read": true, 00:12:03.598 "write": true, 00:12:03.598 "unmap": true, 00:12:03.598 "flush": true, 00:12:03.598 "reset": true, 00:12:03.598 
"nvme_admin": false, 00:12:03.598 "nvme_io": false, 00:12:03.598 "nvme_io_md": false, 00:12:03.598 "write_zeroes": true, 00:12:03.598 "zcopy": true, 00:12:03.598 "get_zone_info": false, 00:12:03.598 "zone_management": false, 00:12:03.598 "zone_append": false, 00:12:03.598 "compare": false, 00:12:03.598 "compare_and_write": false, 00:12:03.598 "abort": true, 00:12:03.598 "seek_hole": false, 00:12:03.598 "seek_data": false, 00:12:03.598 "copy": true, 00:12:03.598 "nvme_iov_md": false 00:12:03.598 }, 00:12:03.598 "memory_domains": [ 00:12:03.598 { 00:12:03.598 "dma_device_id": "system", 00:12:03.598 "dma_device_type": 1 00:12:03.598 }, 00:12:03.598 { 00:12:03.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.598 "dma_device_type": 2 00:12:03.598 } 00:12:03.598 ], 00:12:03.598 "driver_specific": {} 00:12:03.598 } 00:12:03.598 ] 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.598 16:13:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.598 "name": "Existed_Raid", 00:12:03.598 "uuid": "95170b86-daaa-4c9c-872c-7e82467eb61c", 00:12:03.598 "strip_size_kb": 0, 00:12:03.598 "state": "online", 00:12:03.598 "raid_level": "raid1", 00:12:03.598 "superblock": false, 00:12:03.598 "num_base_bdevs": 4, 00:12:03.598 "num_base_bdevs_discovered": 4, 00:12:03.598 "num_base_bdevs_operational": 4, 00:12:03.598 "base_bdevs_list": [ 00:12:03.598 { 00:12:03.598 "name": "BaseBdev1", 00:12:03.598 "uuid": "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b", 00:12:03.598 "is_configured": true, 00:12:03.598 "data_offset": 0, 00:12:03.598 "data_size": 65536 00:12:03.598 }, 00:12:03.598 { 00:12:03.598 "name": "BaseBdev2", 00:12:03.598 "uuid": "eead3859-a164-4fbd-8a36-74596cf878f6", 00:12:03.598 "is_configured": true, 00:12:03.598 "data_offset": 0, 00:12:03.598 "data_size": 65536 00:12:03.598 }, 00:12:03.598 { 00:12:03.598 "name": "BaseBdev3", 00:12:03.598 "uuid": 
"e902e3d6-4365-40e1-8466-8fd8f28afe3d", 00:12:03.598 "is_configured": true, 00:12:03.598 "data_offset": 0, 00:12:03.598 "data_size": 65536 00:12:03.598 }, 00:12:03.598 { 00:12:03.598 "name": "BaseBdev4", 00:12:03.598 "uuid": "c1d3caff-10b5-40f2-a18f-078c20d24b45", 00:12:03.598 "is_configured": true, 00:12:03.598 "data_offset": 0, 00:12:03.598 "data_size": 65536 00:12:03.598 } 00:12:03.598 ] 00:12:03.598 }' 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.598 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.856 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.856 [2024-09-28 16:13:18.486408] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.857 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.857 16:13:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.857 "name": "Existed_Raid", 00:12:03.857 "aliases": [ 00:12:03.857 "95170b86-daaa-4c9c-872c-7e82467eb61c" 00:12:03.857 ], 00:12:03.857 "product_name": "Raid Volume", 00:12:03.857 "block_size": 512, 00:12:03.857 "num_blocks": 65536, 00:12:03.857 "uuid": "95170b86-daaa-4c9c-872c-7e82467eb61c", 00:12:03.857 "assigned_rate_limits": { 00:12:03.857 "rw_ios_per_sec": 0, 00:12:03.857 "rw_mbytes_per_sec": 0, 00:12:03.857 "r_mbytes_per_sec": 0, 00:12:03.857 "w_mbytes_per_sec": 0 00:12:03.857 }, 00:12:03.857 "claimed": false, 00:12:03.857 "zoned": false, 00:12:03.857 "supported_io_types": { 00:12:03.857 "read": true, 00:12:03.857 "write": true, 00:12:03.857 "unmap": false, 00:12:03.857 "flush": false, 00:12:03.857 "reset": true, 00:12:03.857 "nvme_admin": false, 00:12:03.857 "nvme_io": false, 00:12:03.857 "nvme_io_md": false, 00:12:03.857 "write_zeroes": true, 00:12:03.857 "zcopy": false, 00:12:03.857 "get_zone_info": false, 00:12:03.857 "zone_management": false, 00:12:03.857 "zone_append": false, 00:12:03.857 "compare": false, 00:12:03.857 "compare_and_write": false, 00:12:03.857 "abort": false, 00:12:03.857 "seek_hole": false, 00:12:03.857 "seek_data": false, 00:12:03.857 "copy": false, 00:12:03.857 "nvme_iov_md": false 00:12:03.857 }, 00:12:03.857 "memory_domains": [ 00:12:03.857 { 00:12:03.857 "dma_device_id": "system", 00:12:03.857 "dma_device_type": 1 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.857 "dma_device_type": 2 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "dma_device_id": "system", 00:12:03.857 "dma_device_type": 1 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.857 "dma_device_type": 2 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "dma_device_id": "system", 00:12:03.857 "dma_device_type": 1 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:03.857 "dma_device_type": 2 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "dma_device_id": "system", 00:12:03.857 "dma_device_type": 1 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.857 "dma_device_type": 2 00:12:03.857 } 00:12:03.857 ], 00:12:03.857 "driver_specific": { 00:12:03.857 "raid": { 00:12:03.857 "uuid": "95170b86-daaa-4c9c-872c-7e82467eb61c", 00:12:03.857 "strip_size_kb": 0, 00:12:03.857 "state": "online", 00:12:03.857 "raid_level": "raid1", 00:12:03.857 "superblock": false, 00:12:03.857 "num_base_bdevs": 4, 00:12:03.857 "num_base_bdevs_discovered": 4, 00:12:03.857 "num_base_bdevs_operational": 4, 00:12:03.857 "base_bdevs_list": [ 00:12:03.857 { 00:12:03.857 "name": "BaseBdev1", 00:12:03.857 "uuid": "03139a54-78cf-4f8a-a8e0-4fca4fa11d0b", 00:12:03.857 "is_configured": true, 00:12:03.857 "data_offset": 0, 00:12:03.857 "data_size": 65536 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "name": "BaseBdev2", 00:12:03.857 "uuid": "eead3859-a164-4fbd-8a36-74596cf878f6", 00:12:03.857 "is_configured": true, 00:12:03.857 "data_offset": 0, 00:12:03.857 "data_size": 65536 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "name": "BaseBdev3", 00:12:03.857 "uuid": "e902e3d6-4365-40e1-8466-8fd8f28afe3d", 00:12:03.857 "is_configured": true, 00:12:03.857 "data_offset": 0, 00:12:03.857 "data_size": 65536 00:12:03.857 }, 00:12:03.857 { 00:12:03.857 "name": "BaseBdev4", 00:12:03.857 "uuid": "c1d3caff-10b5-40f2-a18f-078c20d24b45", 00:12:03.857 "is_configured": true, 00:12:03.857 "data_offset": 0, 00:12:03.857 "data_size": 65536 00:12:03.857 } 00:12:03.857 ] 00:12:03.857 } 00:12:03.857 } 00:12:03.857 }' 00:12:03.857 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:04.115 BaseBdev2 00:12:04.115 BaseBdev3 
00:12:04.115 BaseBdev4' 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.115 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.116 16:13:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.116 16:13:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.116 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.375 [2024-09-28 16:13:18.801573] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.375 
16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.375 "name": "Existed_Raid", 00:12:04.375 "uuid": "95170b86-daaa-4c9c-872c-7e82467eb61c", 00:12:04.375 "strip_size_kb": 0, 00:12:04.375 "state": "online", 00:12:04.375 "raid_level": "raid1", 00:12:04.375 "superblock": false, 00:12:04.375 "num_base_bdevs": 4, 00:12:04.375 "num_base_bdevs_discovered": 3, 00:12:04.375 "num_base_bdevs_operational": 3, 00:12:04.375 "base_bdevs_list": [ 00:12:04.375 { 00:12:04.375 "name": null, 00:12:04.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.375 "is_configured": false, 00:12:04.375 "data_offset": 0, 00:12:04.375 "data_size": 65536 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "name": "BaseBdev2", 00:12:04.375 "uuid": "eead3859-a164-4fbd-8a36-74596cf878f6", 00:12:04.375 "is_configured": true, 00:12:04.375 "data_offset": 0, 00:12:04.375 "data_size": 65536 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "name": "BaseBdev3", 00:12:04.375 "uuid": "e902e3d6-4365-40e1-8466-8fd8f28afe3d", 00:12:04.375 "is_configured": true, 00:12:04.375 "data_offset": 0, 
00:12:04.375 "data_size": 65536 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "name": "BaseBdev4", 00:12:04.375 "uuid": "c1d3caff-10b5-40f2-a18f-078c20d24b45", 00:12:04.375 "is_configured": true, 00:12:04.375 "data_offset": 0, 00:12:04.375 "data_size": 65536 00:12:04.375 } 00:12:04.375 ] 00:12:04.375 }' 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.375 16:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.942 [2024-09-28 16:13:19.434969] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.942 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.942 [2024-09-28 16:13:19.597653] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.201 [2024-09-28 16:13:19.761387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:05.201 [2024-09-28 16:13:19.761499] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.201 [2024-09-28 16:13:19.863818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.201 [2024-09-28 16:13:19.863966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.201 [2024-09-28 16:13:19.864011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.201 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.461 BaseBdev2 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.461 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 [ 00:12:05.462 { 00:12:05.462 "name": "BaseBdev2", 00:12:05.462 "aliases": [ 00:12:05.462 "bf1c9cb0-b51e-4f02-abd0-d67a748e4910" 00:12:05.462 ], 00:12:05.462 "product_name": "Malloc disk", 00:12:05.462 "block_size": 512, 00:12:05.462 "num_blocks": 65536, 00:12:05.462 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:05.462 "assigned_rate_limits": { 00:12:05.462 "rw_ios_per_sec": 0, 00:12:05.462 "rw_mbytes_per_sec": 0, 00:12:05.462 "r_mbytes_per_sec": 0, 00:12:05.462 "w_mbytes_per_sec": 0 00:12:05.462 }, 00:12:05.462 "claimed": false, 00:12:05.462 "zoned": false, 00:12:05.462 "supported_io_types": { 00:12:05.462 "read": true, 00:12:05.462 "write": true, 00:12:05.462 "unmap": true, 00:12:05.462 "flush": true, 00:12:05.462 "reset": true, 00:12:05.462 "nvme_admin": false, 00:12:05.462 "nvme_io": false, 00:12:05.462 "nvme_io_md": false, 00:12:05.462 "write_zeroes": true, 00:12:05.462 "zcopy": true, 00:12:05.462 "get_zone_info": false, 00:12:05.462 "zone_management": false, 00:12:05.462 "zone_append": false, 
00:12:05.462 "compare": false, 00:12:05.462 "compare_and_write": false, 00:12:05.462 "abort": true, 00:12:05.462 "seek_hole": false, 00:12:05.462 "seek_data": false, 00:12:05.462 "copy": true, 00:12:05.462 "nvme_iov_md": false 00:12:05.462 }, 00:12:05.462 "memory_domains": [ 00:12:05.462 { 00:12:05.462 "dma_device_id": "system", 00:12:05.462 "dma_device_type": 1 00:12:05.462 }, 00:12:05.462 { 00:12:05.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.462 "dma_device_type": 2 00:12:05.462 } 00:12:05.462 ], 00:12:05.462 "driver_specific": {} 00:12:05.462 } 00:12:05.462 ] 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 BaseBdev3 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 [ 00:12:05.462 { 00:12:05.462 "name": "BaseBdev3", 00:12:05.462 "aliases": [ 00:12:05.462 "d42e0e9d-6022-4143-81bd-5eca4f02ceda" 00:12:05.462 ], 00:12:05.462 "product_name": "Malloc disk", 00:12:05.462 "block_size": 512, 00:12:05.462 "num_blocks": 65536, 00:12:05.462 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:05.462 "assigned_rate_limits": { 00:12:05.462 "rw_ios_per_sec": 0, 00:12:05.462 "rw_mbytes_per_sec": 0, 00:12:05.462 "r_mbytes_per_sec": 0, 00:12:05.462 "w_mbytes_per_sec": 0 00:12:05.462 }, 00:12:05.462 "claimed": false, 00:12:05.462 "zoned": false, 00:12:05.462 "supported_io_types": { 00:12:05.462 "read": true, 00:12:05.462 "write": true, 00:12:05.462 "unmap": true, 00:12:05.462 "flush": true, 00:12:05.462 "reset": true, 00:12:05.462 "nvme_admin": false, 00:12:05.462 "nvme_io": false, 00:12:05.462 "nvme_io_md": false, 00:12:05.462 "write_zeroes": true, 00:12:05.462 "zcopy": true, 00:12:05.462 "get_zone_info": false, 00:12:05.462 "zone_management": false, 00:12:05.462 "zone_append": false, 
00:12:05.462 "compare": false, 00:12:05.462 "compare_and_write": false, 00:12:05.462 "abort": true, 00:12:05.462 "seek_hole": false, 00:12:05.462 "seek_data": false, 00:12:05.462 "copy": true, 00:12:05.462 "nvme_iov_md": false 00:12:05.462 }, 00:12:05.462 "memory_domains": [ 00:12:05.462 { 00:12:05.462 "dma_device_id": "system", 00:12:05.462 "dma_device_type": 1 00:12:05.462 }, 00:12:05.462 { 00:12:05.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.462 "dma_device_type": 2 00:12:05.462 } 00:12:05.462 ], 00:12:05.462 "driver_specific": {} 00:12:05.462 } 00:12:05.462 ] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 BaseBdev4 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.722 [ 00:12:05.722 { 00:12:05.722 "name": "BaseBdev4", 00:12:05.722 "aliases": [ 00:12:05.722 "7e1b8470-ae00-4dc9-87af-d8f2627c59ac" 00:12:05.722 ], 00:12:05.722 "product_name": "Malloc disk", 00:12:05.722 "block_size": 512, 00:12:05.722 "num_blocks": 65536, 00:12:05.722 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:05.722 "assigned_rate_limits": { 00:12:05.722 "rw_ios_per_sec": 0, 00:12:05.722 "rw_mbytes_per_sec": 0, 00:12:05.722 "r_mbytes_per_sec": 0, 00:12:05.722 "w_mbytes_per_sec": 0 00:12:05.722 }, 00:12:05.722 "claimed": false, 00:12:05.722 "zoned": false, 00:12:05.722 "supported_io_types": { 00:12:05.722 "read": true, 00:12:05.722 "write": true, 00:12:05.722 "unmap": true, 00:12:05.722 "flush": true, 00:12:05.722 "reset": true, 00:12:05.722 "nvme_admin": false, 00:12:05.722 "nvme_io": false, 00:12:05.722 "nvme_io_md": false, 00:12:05.722 "write_zeroes": true, 00:12:05.722 "zcopy": true, 00:12:05.722 "get_zone_info": false, 00:12:05.722 "zone_management": false, 00:12:05.722 "zone_append": false, 
00:12:05.722 "compare": false, 00:12:05.722 "compare_and_write": false, 00:12:05.722 "abort": true, 00:12:05.722 "seek_hole": false, 00:12:05.722 "seek_data": false, 00:12:05.722 "copy": true, 00:12:05.722 "nvme_iov_md": false 00:12:05.722 }, 00:12:05.722 "memory_domains": [ 00:12:05.722 { 00:12:05.722 "dma_device_id": "system", 00:12:05.722 "dma_device_type": 1 00:12:05.722 }, 00:12:05.722 { 00:12:05.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.722 "dma_device_type": 2 00:12:05.722 } 00:12:05.722 ], 00:12:05.722 "driver_specific": {} 00:12:05.722 } 00:12:05.722 ] 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.722 [2024-09-28 16:13:20.173081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.722 [2024-09-28 16:13:20.173190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.722 [2024-09-28 16:13:20.173239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.722 [2024-09-28 16:13:20.175291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.722 [2024-09-28 16:13:20.175381] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:05.722 "name": "Existed_Raid", 00:12:05.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.722 "strip_size_kb": 0, 00:12:05.722 "state": "configuring", 00:12:05.722 "raid_level": "raid1", 00:12:05.722 "superblock": false, 00:12:05.722 "num_base_bdevs": 4, 00:12:05.722 "num_base_bdevs_discovered": 3, 00:12:05.722 "num_base_bdevs_operational": 4, 00:12:05.722 "base_bdevs_list": [ 00:12:05.722 { 00:12:05.722 "name": "BaseBdev1", 00:12:05.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.722 "is_configured": false, 00:12:05.722 "data_offset": 0, 00:12:05.722 "data_size": 0 00:12:05.722 }, 00:12:05.722 { 00:12:05.722 "name": "BaseBdev2", 00:12:05.722 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:05.722 "is_configured": true, 00:12:05.722 "data_offset": 0, 00:12:05.722 "data_size": 65536 00:12:05.722 }, 00:12:05.722 { 00:12:05.722 "name": "BaseBdev3", 00:12:05.722 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:05.722 "is_configured": true, 00:12:05.722 "data_offset": 0, 00:12:05.722 "data_size": 65536 00:12:05.722 }, 00:12:05.722 { 00:12:05.722 "name": "BaseBdev4", 00:12:05.722 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:05.722 "is_configured": true, 00:12:05.722 "data_offset": 0, 00:12:05.722 "data_size": 65536 00:12:05.722 } 00:12:05.722 ] 00:12:05.722 }' 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.722 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.981 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.982 [2024-09-28 16:13:20.612360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.982 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.241 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.241 "name": "Existed_Raid", 00:12:06.241 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.241 "strip_size_kb": 0, 00:12:06.241 "state": "configuring", 00:12:06.241 "raid_level": "raid1", 00:12:06.241 "superblock": false, 00:12:06.241 "num_base_bdevs": 4, 00:12:06.241 "num_base_bdevs_discovered": 2, 00:12:06.241 "num_base_bdevs_operational": 4, 00:12:06.241 "base_bdevs_list": [ 00:12:06.241 { 00:12:06.241 "name": "BaseBdev1", 00:12:06.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.241 "is_configured": false, 00:12:06.241 "data_offset": 0, 00:12:06.241 "data_size": 0 00:12:06.241 }, 00:12:06.241 { 00:12:06.241 "name": null, 00:12:06.241 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:06.241 "is_configured": false, 00:12:06.241 "data_offset": 0, 00:12:06.241 "data_size": 65536 00:12:06.241 }, 00:12:06.241 { 00:12:06.241 "name": "BaseBdev3", 00:12:06.241 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:06.241 "is_configured": true, 00:12:06.241 "data_offset": 0, 00:12:06.241 "data_size": 65536 00:12:06.241 }, 00:12:06.241 { 00:12:06.241 "name": "BaseBdev4", 00:12:06.241 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:06.241 "is_configured": true, 00:12:06.241 "data_offset": 0, 00:12:06.241 "data_size": 65536 00:12:06.241 } 00:12:06.241 ] 00:12:06.241 }' 00:12:06.241 16:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.241 16:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.500 [2024-09-28 16:13:21.149489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.500 BaseBdev1 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.500 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.500 [ 00:12:06.500 { 00:12:06.500 "name": "BaseBdev1", 00:12:06.500 "aliases": [ 00:12:06.500 "31a4abc8-dd43-40f5-abca-2ac210f32e19" 00:12:06.500 ], 00:12:06.500 "product_name": "Malloc disk", 00:12:06.500 "block_size": 512, 00:12:06.500 "num_blocks": 65536, 00:12:06.500 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:06.500 "assigned_rate_limits": { 00:12:06.500 "rw_ios_per_sec": 0, 00:12:06.500 "rw_mbytes_per_sec": 0, 00:12:06.500 "r_mbytes_per_sec": 0, 00:12:06.500 "w_mbytes_per_sec": 0 00:12:06.500 }, 00:12:06.500 "claimed": true, 00:12:06.500 "claim_type": "exclusive_write", 00:12:06.500 "zoned": false, 00:12:06.500 "supported_io_types": { 00:12:06.500 "read": true, 00:12:06.500 "write": true, 00:12:06.500 "unmap": true, 00:12:06.500 "flush": true, 00:12:06.500 "reset": true, 00:12:06.500 "nvme_admin": false, 00:12:06.500 "nvme_io": false, 00:12:06.500 "nvme_io_md": false, 00:12:06.500 "write_zeroes": true, 00:12:06.500 "zcopy": true, 00:12:06.500 "get_zone_info": false, 00:12:06.500 "zone_management": false, 00:12:06.500 "zone_append": false, 00:12:06.500 "compare": false, 00:12:06.500 "compare_and_write": false, 00:12:06.500 "abort": true, 00:12:06.500 "seek_hole": false, 00:12:06.500 "seek_data": false, 00:12:06.500 "copy": true, 00:12:06.500 "nvme_iov_md": false 00:12:06.500 }, 00:12:06.760 "memory_domains": [ 00:12:06.760 { 00:12:06.760 "dma_device_id": "system", 00:12:06.760 "dma_device_type": 1 00:12:06.760 }, 00:12:06.760 { 00:12:06.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.760 "dma_device_type": 2 00:12:06.760 } 00:12:06.760 ], 00:12:06.760 "driver_specific": {} 00:12:06.760 } 00:12:06.760 ] 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.760 "name": "Existed_Raid", 00:12:06.760 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.760 "strip_size_kb": 0, 00:12:06.760 "state": "configuring", 00:12:06.760 "raid_level": "raid1", 00:12:06.760 "superblock": false, 00:12:06.760 "num_base_bdevs": 4, 00:12:06.760 "num_base_bdevs_discovered": 3, 00:12:06.760 "num_base_bdevs_operational": 4, 00:12:06.760 "base_bdevs_list": [ 00:12:06.760 { 00:12:06.760 "name": "BaseBdev1", 00:12:06.760 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:06.760 "is_configured": true, 00:12:06.760 "data_offset": 0, 00:12:06.760 "data_size": 65536 00:12:06.760 }, 00:12:06.760 { 00:12:06.760 "name": null, 00:12:06.760 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:06.760 "is_configured": false, 00:12:06.760 "data_offset": 0, 00:12:06.760 "data_size": 65536 00:12:06.760 }, 00:12:06.760 { 00:12:06.760 "name": "BaseBdev3", 00:12:06.760 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:06.760 "is_configured": true, 00:12:06.760 "data_offset": 0, 00:12:06.760 "data_size": 65536 00:12:06.760 }, 00:12:06.760 { 00:12:06.760 "name": "BaseBdev4", 00:12:06.760 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:06.760 "is_configured": true, 00:12:06.760 "data_offset": 0, 00:12:06.760 "data_size": 65536 00:12:06.760 } 00:12:06.760 ] 00:12:06.760 }' 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.760 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.019 [2024-09-28 16:13:21.640713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.019 "name": "Existed_Raid", 00:12:07.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.019 "strip_size_kb": 0, 00:12:07.019 "state": "configuring", 00:12:07.019 "raid_level": "raid1", 00:12:07.019 "superblock": false, 00:12:07.019 "num_base_bdevs": 4, 00:12:07.019 "num_base_bdevs_discovered": 2, 00:12:07.019 "num_base_bdevs_operational": 4, 00:12:07.019 "base_bdevs_list": [ 00:12:07.019 { 00:12:07.019 "name": "BaseBdev1", 00:12:07.019 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:07.019 "is_configured": true, 00:12:07.019 "data_offset": 0, 00:12:07.019 "data_size": 65536 00:12:07.019 }, 00:12:07.019 { 00:12:07.019 "name": null, 00:12:07.019 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:07.019 "is_configured": false, 00:12:07.019 "data_offset": 0, 00:12:07.019 "data_size": 65536 00:12:07.019 }, 00:12:07.019 { 00:12:07.019 "name": null, 00:12:07.019 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:07.019 "is_configured": false, 00:12:07.019 "data_offset": 0, 00:12:07.019 "data_size": 65536 00:12:07.019 }, 00:12:07.019 { 00:12:07.019 "name": "BaseBdev4", 00:12:07.019 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:07.019 "is_configured": true, 00:12:07.019 "data_offset": 0, 00:12:07.019 "data_size": 65536 00:12:07.019 } 00:12:07.019 ] 00:12:07.019 }' 00:12:07.019 16:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.279 16:13:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.538 [2024-09-28 16:13:22.159864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.538 16:13:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.538 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.538 "name": "Existed_Raid", 00:12:07.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.538 "strip_size_kb": 0, 00:12:07.538 "state": "configuring", 00:12:07.538 "raid_level": "raid1", 00:12:07.538 "superblock": false, 00:12:07.538 "num_base_bdevs": 4, 00:12:07.538 "num_base_bdevs_discovered": 3, 00:12:07.538 "num_base_bdevs_operational": 4, 00:12:07.538 "base_bdevs_list": [ 00:12:07.538 { 00:12:07.538 "name": "BaseBdev1", 00:12:07.538 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:07.538 "is_configured": true, 00:12:07.538 "data_offset": 0, 00:12:07.538 "data_size": 65536 00:12:07.538 }, 00:12:07.538 { 00:12:07.539 "name": null, 00:12:07.539 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:07.539 "is_configured": false, 00:12:07.539 "data_offset": 
0, 00:12:07.539 "data_size": 65536 00:12:07.539 }, 00:12:07.539 { 00:12:07.539 "name": "BaseBdev3", 00:12:07.539 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:07.539 "is_configured": true, 00:12:07.539 "data_offset": 0, 00:12:07.539 "data_size": 65536 00:12:07.539 }, 00:12:07.539 { 00:12:07.539 "name": "BaseBdev4", 00:12:07.539 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:07.539 "is_configured": true, 00:12:07.539 "data_offset": 0, 00:12:07.539 "data_size": 65536 00:12:07.539 } 00:12:07.539 ] 00:12:07.539 }' 00:12:07.539 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.539 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.108 [2024-09-28 16:13:22.623092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.108 16:13:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.108 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.109 "name": "Existed_Raid", 00:12:08.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.109 "strip_size_kb": 0, 00:12:08.109 "state": "configuring", 00:12:08.109 
"raid_level": "raid1", 00:12:08.109 "superblock": false, 00:12:08.109 "num_base_bdevs": 4, 00:12:08.109 "num_base_bdevs_discovered": 2, 00:12:08.109 "num_base_bdevs_operational": 4, 00:12:08.109 "base_bdevs_list": [ 00:12:08.109 { 00:12:08.109 "name": null, 00:12:08.109 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:08.109 "is_configured": false, 00:12:08.109 "data_offset": 0, 00:12:08.109 "data_size": 65536 00:12:08.109 }, 00:12:08.109 { 00:12:08.109 "name": null, 00:12:08.109 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:08.109 "is_configured": false, 00:12:08.109 "data_offset": 0, 00:12:08.109 "data_size": 65536 00:12:08.109 }, 00:12:08.109 { 00:12:08.109 "name": "BaseBdev3", 00:12:08.109 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:08.109 "is_configured": true, 00:12:08.109 "data_offset": 0, 00:12:08.109 "data_size": 65536 00:12:08.109 }, 00:12:08.109 { 00:12:08.109 "name": "BaseBdev4", 00:12:08.109 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:08.109 "is_configured": true, 00:12:08.109 "data_offset": 0, 00:12:08.109 "data_size": 65536 00:12:08.109 } 00:12:08.109 ] 00:12:08.109 }' 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.109 16:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.700 [2024-09-28 16:13:23.273672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.700 "name": "Existed_Raid", 00:12:08.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.700 "strip_size_kb": 0, 00:12:08.700 "state": "configuring", 00:12:08.700 "raid_level": "raid1", 00:12:08.700 "superblock": false, 00:12:08.700 "num_base_bdevs": 4, 00:12:08.700 "num_base_bdevs_discovered": 3, 00:12:08.700 "num_base_bdevs_operational": 4, 00:12:08.700 "base_bdevs_list": [ 00:12:08.700 { 00:12:08.700 "name": null, 00:12:08.700 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:08.700 "is_configured": false, 00:12:08.700 "data_offset": 0, 00:12:08.700 "data_size": 65536 00:12:08.700 }, 00:12:08.700 { 00:12:08.700 "name": "BaseBdev2", 00:12:08.700 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:08.700 "is_configured": true, 00:12:08.700 "data_offset": 0, 00:12:08.700 "data_size": 65536 00:12:08.700 }, 00:12:08.700 { 00:12:08.700 "name": "BaseBdev3", 00:12:08.700 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:08.700 "is_configured": true, 00:12:08.700 "data_offset": 0, 00:12:08.700 "data_size": 65536 00:12:08.700 }, 00:12:08.700 { 00:12:08.700 "name": "BaseBdev4", 00:12:08.700 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:08.700 "is_configured": true, 00:12:08.700 "data_offset": 0, 00:12:08.700 "data_size": 65536 00:12:08.700 } 00:12:08.700 ] 00:12:08.700 }' 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.700 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.270 16:13:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 31a4abc8-dd43-40f5-abca-2ac210f32e19 00:12:09.270 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.271 [2024-09-28 16:13:23.805945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:09.271 [2024-09-28 16:13:23.806066] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.271 [2024-09-28 16:13:23.806097] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:09.271 
[2024-09-28 16:13:23.806468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:09.271 [2024-09-28 16:13:23.806694] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.271 [2024-09-28 16:13:23.806736] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:09.271 [2024-09-28 16:13:23.807041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.271 NewBaseBdev 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.271 [ 00:12:09.271 { 00:12:09.271 "name": "NewBaseBdev", 00:12:09.271 "aliases": [ 00:12:09.271 "31a4abc8-dd43-40f5-abca-2ac210f32e19" 00:12:09.271 ], 00:12:09.271 "product_name": "Malloc disk", 00:12:09.271 "block_size": 512, 00:12:09.271 "num_blocks": 65536, 00:12:09.271 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:09.271 "assigned_rate_limits": { 00:12:09.271 "rw_ios_per_sec": 0, 00:12:09.271 "rw_mbytes_per_sec": 0, 00:12:09.271 "r_mbytes_per_sec": 0, 00:12:09.271 "w_mbytes_per_sec": 0 00:12:09.271 }, 00:12:09.271 "claimed": true, 00:12:09.271 "claim_type": "exclusive_write", 00:12:09.271 "zoned": false, 00:12:09.271 "supported_io_types": { 00:12:09.271 "read": true, 00:12:09.271 "write": true, 00:12:09.271 "unmap": true, 00:12:09.271 "flush": true, 00:12:09.271 "reset": true, 00:12:09.271 "nvme_admin": false, 00:12:09.271 "nvme_io": false, 00:12:09.271 "nvme_io_md": false, 00:12:09.271 "write_zeroes": true, 00:12:09.271 "zcopy": true, 00:12:09.271 "get_zone_info": false, 00:12:09.271 "zone_management": false, 00:12:09.271 "zone_append": false, 00:12:09.271 "compare": false, 00:12:09.271 "compare_and_write": false, 00:12:09.271 "abort": true, 00:12:09.271 "seek_hole": false, 00:12:09.271 "seek_data": false, 00:12:09.271 "copy": true, 00:12:09.271 "nvme_iov_md": false 00:12:09.271 }, 00:12:09.271 "memory_domains": [ 00:12:09.271 { 00:12:09.271 "dma_device_id": "system", 00:12:09.271 "dma_device_type": 1 00:12:09.271 }, 00:12:09.271 { 00:12:09.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.271 "dma_device_type": 2 00:12:09.271 } 00:12:09.271 ], 00:12:09.271 "driver_specific": {} 00:12:09.271 } 00:12:09.271 ] 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.271 "name": "Existed_Raid", 00:12:09.271 "uuid": "273f8ee3-f315-42f3-b1ed-403b80e1906c", 00:12:09.271 "strip_size_kb": 0, 00:12:09.271 "state": "online", 00:12:09.271 
"raid_level": "raid1", 00:12:09.271 "superblock": false, 00:12:09.271 "num_base_bdevs": 4, 00:12:09.271 "num_base_bdevs_discovered": 4, 00:12:09.271 "num_base_bdevs_operational": 4, 00:12:09.271 "base_bdevs_list": [ 00:12:09.271 { 00:12:09.271 "name": "NewBaseBdev", 00:12:09.271 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:09.271 "is_configured": true, 00:12:09.271 "data_offset": 0, 00:12:09.271 "data_size": 65536 00:12:09.271 }, 00:12:09.271 { 00:12:09.271 "name": "BaseBdev2", 00:12:09.271 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:09.271 "is_configured": true, 00:12:09.271 "data_offset": 0, 00:12:09.271 "data_size": 65536 00:12:09.271 }, 00:12:09.271 { 00:12:09.271 "name": "BaseBdev3", 00:12:09.271 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:09.271 "is_configured": true, 00:12:09.271 "data_offset": 0, 00:12:09.271 "data_size": 65536 00:12:09.271 }, 00:12:09.271 { 00:12:09.271 "name": "BaseBdev4", 00:12:09.271 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:09.271 "is_configured": true, 00:12:09.271 "data_offset": 0, 00:12:09.271 "data_size": 65536 00:12:09.271 } 00:12:09.271 ] 00:12:09.271 }' 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.271 16:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.840 [2024-09-28 16:13:24.297444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.840 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.840 "name": "Existed_Raid", 00:12:09.840 "aliases": [ 00:12:09.840 "273f8ee3-f315-42f3-b1ed-403b80e1906c" 00:12:09.840 ], 00:12:09.840 "product_name": "Raid Volume", 00:12:09.840 "block_size": 512, 00:12:09.840 "num_blocks": 65536, 00:12:09.840 "uuid": "273f8ee3-f315-42f3-b1ed-403b80e1906c", 00:12:09.840 "assigned_rate_limits": { 00:12:09.840 "rw_ios_per_sec": 0, 00:12:09.840 "rw_mbytes_per_sec": 0, 00:12:09.840 "r_mbytes_per_sec": 0, 00:12:09.840 "w_mbytes_per_sec": 0 00:12:09.840 }, 00:12:09.840 "claimed": false, 00:12:09.840 "zoned": false, 00:12:09.840 "supported_io_types": { 00:12:09.840 "read": true, 00:12:09.840 "write": true, 00:12:09.840 "unmap": false, 00:12:09.840 "flush": false, 00:12:09.841 "reset": true, 00:12:09.841 "nvme_admin": false, 00:12:09.841 "nvme_io": false, 00:12:09.841 "nvme_io_md": false, 00:12:09.841 "write_zeroes": true, 00:12:09.841 "zcopy": false, 00:12:09.841 "get_zone_info": false, 00:12:09.841 "zone_management": false, 00:12:09.841 "zone_append": false, 00:12:09.841 "compare": false, 00:12:09.841 "compare_and_write": false, 00:12:09.841 "abort": false, 00:12:09.841 "seek_hole": false, 00:12:09.841 "seek_data": false, 00:12:09.841 
"copy": false, 00:12:09.841 "nvme_iov_md": false 00:12:09.841 }, 00:12:09.841 "memory_domains": [ 00:12:09.841 { 00:12:09.841 "dma_device_id": "system", 00:12:09.841 "dma_device_type": 1 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.841 "dma_device_type": 2 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "dma_device_id": "system", 00:12:09.841 "dma_device_type": 1 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.841 "dma_device_type": 2 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "dma_device_id": "system", 00:12:09.841 "dma_device_type": 1 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.841 "dma_device_type": 2 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "dma_device_id": "system", 00:12:09.841 "dma_device_type": 1 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.841 "dma_device_type": 2 00:12:09.841 } 00:12:09.841 ], 00:12:09.841 "driver_specific": { 00:12:09.841 "raid": { 00:12:09.841 "uuid": "273f8ee3-f315-42f3-b1ed-403b80e1906c", 00:12:09.841 "strip_size_kb": 0, 00:12:09.841 "state": "online", 00:12:09.841 "raid_level": "raid1", 00:12:09.841 "superblock": false, 00:12:09.841 "num_base_bdevs": 4, 00:12:09.841 "num_base_bdevs_discovered": 4, 00:12:09.841 "num_base_bdevs_operational": 4, 00:12:09.841 "base_bdevs_list": [ 00:12:09.841 { 00:12:09.841 "name": "NewBaseBdev", 00:12:09.841 "uuid": "31a4abc8-dd43-40f5-abca-2ac210f32e19", 00:12:09.841 "is_configured": true, 00:12:09.841 "data_offset": 0, 00:12:09.841 "data_size": 65536 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "name": "BaseBdev2", 00:12:09.841 "uuid": "bf1c9cb0-b51e-4f02-abd0-d67a748e4910", 00:12:09.841 "is_configured": true, 00:12:09.841 "data_offset": 0, 00:12:09.841 "data_size": 65536 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "name": "BaseBdev3", 00:12:09.841 "uuid": "d42e0e9d-6022-4143-81bd-5eca4f02ceda", 00:12:09.841 
"is_configured": true, 00:12:09.841 "data_offset": 0, 00:12:09.841 "data_size": 65536 00:12:09.841 }, 00:12:09.841 { 00:12:09.841 "name": "BaseBdev4", 00:12:09.841 "uuid": "7e1b8470-ae00-4dc9-87af-d8f2627c59ac", 00:12:09.841 "is_configured": true, 00:12:09.841 "data_offset": 0, 00:12:09.841 "data_size": 65536 00:12:09.841 } 00:12:09.841 ] 00:12:09.841 } 00:12:09.841 } 00:12:09.841 }' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:09.841 BaseBdev2 00:12:09.841 BaseBdev3 00:12:09.841 BaseBdev4' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.841 16:13:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.841 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.100 16:13:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.100 [2024-09-28 16:13:24.616561] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.100 [2024-09-28 16:13:24.616633] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.100 [2024-09-28 16:13:24.616721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.100 [2024-09-28 16:13:24.617037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.100 [2024-09-28 16:13:24.617051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73222 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73222 ']' 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73222 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73222 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.100 killing process with pid 73222 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73222' 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73222 00:12:10.100 [2024-09-28 16:13:24.666097] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.100 16:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73222 00:12:10.671 [2024-09-28 16:13:25.077487] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:12.052 00:12:12.052 real 0m11.744s 00:12:12.052 user 0m18.205s 00:12:12.052 sys 0m2.292s 00:12:12.052 ************************************ 00:12:12.052 END TEST raid_state_function_test 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.052 ************************************ 
00:12:12.052 16:13:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:12.052 16:13:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:12.052 16:13:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.052 16:13:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.052 ************************************ 00:12:12.052 START TEST raid_state_function_test_sb 00:12:12.052 ************************************ 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.052 
16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73889 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73889' 00:12:12.052 Process raid pid: 73889 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73889 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73889 ']' 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.052 16:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.052 [2024-09-28 16:13:26.582078] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:12.052 [2024-09-28 16:13:26.582266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.312 [2024-09-28 16:13:26.748448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.312 [2024-09-28 16:13:26.988287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.570 [2024-09-28 16:13:27.213770] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.570 [2024-09-28 16:13:27.213821] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.830 [2024-09-28 16:13:27.408426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.830 [2024-09-28 16:13:27.408490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.830 [2024-09-28 16:13:27.408500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.830 [2024-09-28 16:13:27.408510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.830 [2024-09-28 16:13:27.408516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:12.830 [2024-09-28 16:13:27.408527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.830 [2024-09-28 16:13:27.408532] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:12.830 [2024-09-28 16:13:27.408542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.830 16:13:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.830 "name": "Existed_Raid", 00:12:12.830 "uuid": "342eb841-a175-4f96-9cbd-78ca9a38f724", 00:12:12.830 "strip_size_kb": 0, 00:12:12.830 "state": "configuring", 00:12:12.830 "raid_level": "raid1", 00:12:12.830 "superblock": true, 00:12:12.830 "num_base_bdevs": 4, 00:12:12.830 "num_base_bdevs_discovered": 0, 00:12:12.830 "num_base_bdevs_operational": 4, 00:12:12.830 "base_bdevs_list": [ 00:12:12.830 { 00:12:12.830 "name": "BaseBdev1", 00:12:12.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.830 "is_configured": false, 00:12:12.830 "data_offset": 0, 00:12:12.830 "data_size": 0 00:12:12.830 }, 00:12:12.830 { 00:12:12.830 "name": "BaseBdev2", 00:12:12.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.830 "is_configured": false, 00:12:12.830 "data_offset": 0, 00:12:12.830 "data_size": 0 00:12:12.830 }, 00:12:12.830 { 00:12:12.830 "name": "BaseBdev3", 00:12:12.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.830 "is_configured": false, 00:12:12.830 "data_offset": 0, 00:12:12.830 "data_size": 0 00:12:12.830 }, 00:12:12.830 { 00:12:12.830 "name": "BaseBdev4", 00:12:12.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.830 "is_configured": false, 00:12:12.830 "data_offset": 0, 00:12:12.830 "data_size": 0 00:12:12.830 } 00:12:12.830 ] 00:12:12.830 }' 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.830 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 [2024-09-28 16:13:27.879533] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.400 [2024-09-28 16:13:27.879647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 [2024-09-28 16:13:27.891522] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.400 [2024-09-28 16:13:27.891606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.400 [2024-09-28 16:13:27.891636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.400 [2024-09-28 16:13:27.891660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.400 [2024-09-28 16:13:27.891678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.400 [2024-09-28 16:13:27.891743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.400 [2024-09-28 16:13:27.891810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:13.400 [2024-09-28 16:13:27.891833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 [2024-09-28 16:13:27.958682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.400 BaseBdev1 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 [ 00:12:13.400 { 00:12:13.400 "name": "BaseBdev1", 00:12:13.400 "aliases": [ 00:12:13.400 "79603953-28b8-4e5c-ba55-e8910fc387b4" 00:12:13.400 ], 00:12:13.400 "product_name": "Malloc disk", 00:12:13.400 "block_size": 512, 00:12:13.400 "num_blocks": 65536, 00:12:13.400 "uuid": "79603953-28b8-4e5c-ba55-e8910fc387b4", 00:12:13.400 "assigned_rate_limits": { 00:12:13.400 "rw_ios_per_sec": 0, 00:12:13.400 "rw_mbytes_per_sec": 0, 00:12:13.400 "r_mbytes_per_sec": 0, 00:12:13.400 "w_mbytes_per_sec": 0 00:12:13.400 }, 00:12:13.400 "claimed": true, 00:12:13.400 "claim_type": "exclusive_write", 00:12:13.400 "zoned": false, 00:12:13.400 "supported_io_types": { 00:12:13.400 "read": true, 00:12:13.400 "write": true, 00:12:13.400 "unmap": true, 00:12:13.400 "flush": true, 00:12:13.400 "reset": true, 00:12:13.400 "nvme_admin": false, 00:12:13.400 "nvme_io": false, 00:12:13.400 "nvme_io_md": false, 00:12:13.400 "write_zeroes": true, 00:12:13.400 "zcopy": true, 00:12:13.400 "get_zone_info": false, 00:12:13.400 "zone_management": false, 00:12:13.400 "zone_append": false, 00:12:13.400 "compare": false, 00:12:13.400 "compare_and_write": false, 00:12:13.400 "abort": true, 00:12:13.400 "seek_hole": false, 00:12:13.400 "seek_data": false, 00:12:13.400 "copy": true, 00:12:13.400 "nvme_iov_md": false 00:12:13.400 }, 00:12:13.400 "memory_domains": [ 00:12:13.400 { 00:12:13.400 "dma_device_id": "system", 00:12:13.400 "dma_device_type": 1 00:12:13.400 }, 00:12:13.400 { 00:12:13.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.400 "dma_device_type": 2 00:12:13.400 } 00:12:13.400 ], 00:12:13.400 "driver_specific": {} 
00:12:13.400 } 00:12:13.400 ] 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.400 16:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.400 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.400 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.400 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.400 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 16:13:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.400 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.400 "name": "Existed_Raid", 00:12:13.400 "uuid": "de277b25-7782-46e7-a029-9ce4929b35d8", 00:12:13.400 "strip_size_kb": 0, 00:12:13.400 "state": "configuring", 00:12:13.400 "raid_level": "raid1", 00:12:13.400 "superblock": true, 00:12:13.400 "num_base_bdevs": 4, 00:12:13.400 "num_base_bdevs_discovered": 1, 00:12:13.400 "num_base_bdevs_operational": 4, 00:12:13.400 "base_bdevs_list": [ 00:12:13.400 { 00:12:13.400 "name": "BaseBdev1", 00:12:13.400 "uuid": "79603953-28b8-4e5c-ba55-e8910fc387b4", 00:12:13.400 "is_configured": true, 00:12:13.400 "data_offset": 2048, 00:12:13.400 "data_size": 63488 00:12:13.400 }, 00:12:13.400 { 00:12:13.400 "name": "BaseBdev2", 00:12:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.401 "is_configured": false, 00:12:13.401 "data_offset": 0, 00:12:13.401 "data_size": 0 00:12:13.401 }, 00:12:13.401 { 00:12:13.401 "name": "BaseBdev3", 00:12:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.401 "is_configured": false, 00:12:13.401 "data_offset": 0, 00:12:13.401 "data_size": 0 00:12:13.401 }, 00:12:13.401 { 00:12:13.401 "name": "BaseBdev4", 00:12:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.401 "is_configured": false, 00:12:13.401 "data_offset": 0, 00:12:13.401 "data_size": 0 00:12:13.401 } 00:12:13.401 ] 00:12:13.401 }' 00:12:13.401 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.401 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.970 [2024-09-28 16:13:28.481816] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.970 [2024-09-28 16:13:28.481864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.970 [2024-09-28 16:13:28.493852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.970 [2024-09-28 16:13:28.495933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.970 [2024-09-28 16:13:28.496030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.970 [2024-09-28 16:13:28.496044] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.970 [2024-09-28 16:13:28.496071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.970 [2024-09-28 16:13:28.496078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.970 [2024-09-28 16:13:28.496087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:13.970 16:13:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.970 "name": 
"Existed_Raid", 00:12:13.970 "uuid": "c6269c6e-ee99-4325-9a28-3996c2fce27f", 00:12:13.970 "strip_size_kb": 0, 00:12:13.970 "state": "configuring", 00:12:13.970 "raid_level": "raid1", 00:12:13.970 "superblock": true, 00:12:13.970 "num_base_bdevs": 4, 00:12:13.970 "num_base_bdevs_discovered": 1, 00:12:13.970 "num_base_bdevs_operational": 4, 00:12:13.970 "base_bdevs_list": [ 00:12:13.970 { 00:12:13.970 "name": "BaseBdev1", 00:12:13.970 "uuid": "79603953-28b8-4e5c-ba55-e8910fc387b4", 00:12:13.970 "is_configured": true, 00:12:13.970 "data_offset": 2048, 00:12:13.970 "data_size": 63488 00:12:13.970 }, 00:12:13.970 { 00:12:13.970 "name": "BaseBdev2", 00:12:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.970 "is_configured": false, 00:12:13.970 "data_offset": 0, 00:12:13.970 "data_size": 0 00:12:13.970 }, 00:12:13.970 { 00:12:13.970 "name": "BaseBdev3", 00:12:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.970 "is_configured": false, 00:12:13.970 "data_offset": 0, 00:12:13.970 "data_size": 0 00:12:13.970 }, 00:12:13.970 { 00:12:13.970 "name": "BaseBdev4", 00:12:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.970 "is_configured": false, 00:12:13.970 "data_offset": 0, 00:12:13.970 "data_size": 0 00:12:13.970 } 00:12:13.970 ] 00:12:13.970 }' 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.970 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.539 16:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.539 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.539 16:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.539 [2024-09-28 16:13:29.024870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.539 
BaseBdev2 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.539 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.539 [ 00:12:14.539 { 00:12:14.539 "name": "BaseBdev2", 00:12:14.539 "aliases": [ 00:12:14.539 "0add80be-f7ae-459e-97f5-b272b9d40b9b" 00:12:14.539 ], 00:12:14.539 "product_name": "Malloc disk", 00:12:14.539 "block_size": 512, 00:12:14.539 "num_blocks": 65536, 00:12:14.539 "uuid": "0add80be-f7ae-459e-97f5-b272b9d40b9b", 00:12:14.539 "assigned_rate_limits": { 
00:12:14.539 "rw_ios_per_sec": 0, 00:12:14.539 "rw_mbytes_per_sec": 0, 00:12:14.539 "r_mbytes_per_sec": 0, 00:12:14.539 "w_mbytes_per_sec": 0 00:12:14.539 }, 00:12:14.539 "claimed": true, 00:12:14.539 "claim_type": "exclusive_write", 00:12:14.539 "zoned": false, 00:12:14.539 "supported_io_types": { 00:12:14.539 "read": true, 00:12:14.539 "write": true, 00:12:14.539 "unmap": true, 00:12:14.539 "flush": true, 00:12:14.539 "reset": true, 00:12:14.539 "nvme_admin": false, 00:12:14.539 "nvme_io": false, 00:12:14.539 "nvme_io_md": false, 00:12:14.539 "write_zeroes": true, 00:12:14.539 "zcopy": true, 00:12:14.539 "get_zone_info": false, 00:12:14.539 "zone_management": false, 00:12:14.539 "zone_append": false, 00:12:14.539 "compare": false, 00:12:14.539 "compare_and_write": false, 00:12:14.539 "abort": true, 00:12:14.539 "seek_hole": false, 00:12:14.539 "seek_data": false, 00:12:14.539 "copy": true, 00:12:14.539 "nvme_iov_md": false 00:12:14.539 }, 00:12:14.539 "memory_domains": [ 00:12:14.539 { 00:12:14.539 "dma_device_id": "system", 00:12:14.539 "dma_device_type": 1 00:12:14.539 }, 00:12:14.539 { 00:12:14.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.539 "dma_device_type": 2 00:12:14.539 } 00:12:14.539 ], 00:12:14.539 "driver_specific": {} 00:12:14.539 } 00:12:14.539 ] 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.540 "name": "Existed_Raid", 00:12:14.540 "uuid": "c6269c6e-ee99-4325-9a28-3996c2fce27f", 00:12:14.540 "strip_size_kb": 0, 00:12:14.540 "state": "configuring", 00:12:14.540 "raid_level": "raid1", 00:12:14.540 "superblock": true, 00:12:14.540 "num_base_bdevs": 4, 00:12:14.540 "num_base_bdevs_discovered": 2, 00:12:14.540 "num_base_bdevs_operational": 4, 00:12:14.540 
"base_bdevs_list": [ 00:12:14.540 { 00:12:14.540 "name": "BaseBdev1", 00:12:14.540 "uuid": "79603953-28b8-4e5c-ba55-e8910fc387b4", 00:12:14.540 "is_configured": true, 00:12:14.540 "data_offset": 2048, 00:12:14.540 "data_size": 63488 00:12:14.540 }, 00:12:14.540 { 00:12:14.540 "name": "BaseBdev2", 00:12:14.540 "uuid": "0add80be-f7ae-459e-97f5-b272b9d40b9b", 00:12:14.540 "is_configured": true, 00:12:14.540 "data_offset": 2048, 00:12:14.540 "data_size": 63488 00:12:14.540 }, 00:12:14.540 { 00:12:14.540 "name": "BaseBdev3", 00:12:14.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.540 "is_configured": false, 00:12:14.540 "data_offset": 0, 00:12:14.540 "data_size": 0 00:12:14.540 }, 00:12:14.540 { 00:12:14.540 "name": "BaseBdev4", 00:12:14.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.540 "is_configured": false, 00:12:14.540 "data_offset": 0, 00:12:14.540 "data_size": 0 00:12:14.540 } 00:12:14.540 ] 00:12:14.540 }' 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.540 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.109 [2024-09-28 16:13:29.530309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.109 BaseBdev3 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.109 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.110 [ 00:12:15.110 { 00:12:15.110 "name": "BaseBdev3", 00:12:15.110 "aliases": [ 00:12:15.110 "15c82c91-8e64-426e-83e5-337bda903975" 00:12:15.110 ], 00:12:15.110 "product_name": "Malloc disk", 00:12:15.110 "block_size": 512, 00:12:15.110 "num_blocks": 65536, 00:12:15.110 "uuid": "15c82c91-8e64-426e-83e5-337bda903975", 00:12:15.110 "assigned_rate_limits": { 00:12:15.110 "rw_ios_per_sec": 0, 00:12:15.110 "rw_mbytes_per_sec": 0, 00:12:15.110 "r_mbytes_per_sec": 0, 00:12:15.110 "w_mbytes_per_sec": 0 00:12:15.110 }, 00:12:15.110 "claimed": true, 00:12:15.110 "claim_type": "exclusive_write", 00:12:15.110 "zoned": false, 00:12:15.110 "supported_io_types": { 00:12:15.110 "read": true, 00:12:15.110 
"write": true, 00:12:15.110 "unmap": true, 00:12:15.110 "flush": true, 00:12:15.110 "reset": true, 00:12:15.110 "nvme_admin": false, 00:12:15.110 "nvme_io": false, 00:12:15.110 "nvme_io_md": false, 00:12:15.110 "write_zeroes": true, 00:12:15.110 "zcopy": true, 00:12:15.110 "get_zone_info": false, 00:12:15.110 "zone_management": false, 00:12:15.110 "zone_append": false, 00:12:15.110 "compare": false, 00:12:15.110 "compare_and_write": false, 00:12:15.110 "abort": true, 00:12:15.110 "seek_hole": false, 00:12:15.110 "seek_data": false, 00:12:15.110 "copy": true, 00:12:15.110 "nvme_iov_md": false 00:12:15.110 }, 00:12:15.110 "memory_domains": [ 00:12:15.110 { 00:12:15.110 "dma_device_id": "system", 00:12:15.110 "dma_device_type": 1 00:12:15.110 }, 00:12:15.110 { 00:12:15.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.110 "dma_device_type": 2 00:12:15.110 } 00:12:15.110 ], 00:12:15.110 "driver_specific": {} 00:12:15.110 } 00:12:15.110 ] 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.110 "name": "Existed_Raid", 00:12:15.110 "uuid": "c6269c6e-ee99-4325-9a28-3996c2fce27f", 00:12:15.110 "strip_size_kb": 0, 00:12:15.110 "state": "configuring", 00:12:15.110 "raid_level": "raid1", 00:12:15.110 "superblock": true, 00:12:15.110 "num_base_bdevs": 4, 00:12:15.110 "num_base_bdevs_discovered": 3, 00:12:15.110 "num_base_bdevs_operational": 4, 00:12:15.110 "base_bdevs_list": [ 00:12:15.110 { 00:12:15.110 "name": "BaseBdev1", 00:12:15.110 "uuid": "79603953-28b8-4e5c-ba55-e8910fc387b4", 00:12:15.110 "is_configured": true, 00:12:15.110 "data_offset": 2048, 00:12:15.110 "data_size": 63488 00:12:15.110 }, 00:12:15.110 { 00:12:15.110 "name": "BaseBdev2", 00:12:15.110 "uuid": 
"0add80be-f7ae-459e-97f5-b272b9d40b9b", 00:12:15.110 "is_configured": true, 00:12:15.110 "data_offset": 2048, 00:12:15.110 "data_size": 63488 00:12:15.110 }, 00:12:15.110 { 00:12:15.110 "name": "BaseBdev3", 00:12:15.110 "uuid": "15c82c91-8e64-426e-83e5-337bda903975", 00:12:15.110 "is_configured": true, 00:12:15.110 "data_offset": 2048, 00:12:15.110 "data_size": 63488 00:12:15.110 }, 00:12:15.110 { 00:12:15.110 "name": "BaseBdev4", 00:12:15.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.110 "is_configured": false, 00:12:15.110 "data_offset": 0, 00:12:15.110 "data_size": 0 00:12:15.110 } 00:12:15.110 ] 00:12:15.110 }' 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.110 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.370 16:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.370 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.370 16:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.370 [2024-09-28 16:13:30.027734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.370 [2024-09-28 16:13:30.028139] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:15.370 [2024-09-28 16:13:30.028197] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:15.370 [2024-09-28 16:13:30.028539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:15.370 [2024-09-28 16:13:30.028742] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:15.370 BaseBdev4 00:12:15.370 [2024-09-28 16:13:30.028792] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:15.370 [2024-09-28 16:13:30.028980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.370 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.370 [ 00:12:15.370 { 00:12:15.370 "name": "BaseBdev4", 00:12:15.370 "aliases": [ 00:12:15.370 "93002eef-bc61-4e24-bb99-ceecb656a465" 00:12:15.370 ], 00:12:15.629 "product_name": "Malloc disk", 00:12:15.629 "block_size": 512, 00:12:15.629 
"num_blocks": 65536, 00:12:15.629 "uuid": "93002eef-bc61-4e24-bb99-ceecb656a465", 00:12:15.629 "assigned_rate_limits": { 00:12:15.629 "rw_ios_per_sec": 0, 00:12:15.629 "rw_mbytes_per_sec": 0, 00:12:15.629 "r_mbytes_per_sec": 0, 00:12:15.629 "w_mbytes_per_sec": 0 00:12:15.629 }, 00:12:15.629 "claimed": true, 00:12:15.629 "claim_type": "exclusive_write", 00:12:15.629 "zoned": false, 00:12:15.629 "supported_io_types": { 00:12:15.629 "read": true, 00:12:15.629 "write": true, 00:12:15.629 "unmap": true, 00:12:15.629 "flush": true, 00:12:15.629 "reset": true, 00:12:15.629 "nvme_admin": false, 00:12:15.629 "nvme_io": false, 00:12:15.629 "nvme_io_md": false, 00:12:15.629 "write_zeroes": true, 00:12:15.629 "zcopy": true, 00:12:15.629 "get_zone_info": false, 00:12:15.629 "zone_management": false, 00:12:15.629 "zone_append": false, 00:12:15.629 "compare": false, 00:12:15.629 "compare_and_write": false, 00:12:15.629 "abort": true, 00:12:15.629 "seek_hole": false, 00:12:15.629 "seek_data": false, 00:12:15.629 "copy": true, 00:12:15.629 "nvme_iov_md": false 00:12:15.629 }, 00:12:15.629 "memory_domains": [ 00:12:15.629 { 00:12:15.629 "dma_device_id": "system", 00:12:15.629 "dma_device_type": 1 00:12:15.629 }, 00:12:15.629 { 00:12:15.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.629 "dma_device_type": 2 00:12:15.629 } 00:12:15.629 ], 00:12:15.629 "driver_specific": {} 00:12:15.629 } 00:12:15.629 ] 00:12:15.629 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.629 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:15.629 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.629 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.629 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.630 "name": "Existed_Raid", 00:12:15.630 "uuid": "c6269c6e-ee99-4325-9a28-3996c2fce27f", 00:12:15.630 "strip_size_kb": 0, 00:12:15.630 "state": "online", 00:12:15.630 "raid_level": "raid1", 00:12:15.630 "superblock": true, 00:12:15.630 "num_base_bdevs": 4, 
00:12:15.630 "num_base_bdevs_discovered": 4, 00:12:15.630 "num_base_bdevs_operational": 4, 00:12:15.630 "base_bdevs_list": [ 00:12:15.630 { 00:12:15.630 "name": "BaseBdev1", 00:12:15.630 "uuid": "79603953-28b8-4e5c-ba55-e8910fc387b4", 00:12:15.630 "is_configured": true, 00:12:15.630 "data_offset": 2048, 00:12:15.630 "data_size": 63488 00:12:15.630 }, 00:12:15.630 { 00:12:15.630 "name": "BaseBdev2", 00:12:15.630 "uuid": "0add80be-f7ae-459e-97f5-b272b9d40b9b", 00:12:15.630 "is_configured": true, 00:12:15.630 "data_offset": 2048, 00:12:15.630 "data_size": 63488 00:12:15.630 }, 00:12:15.630 { 00:12:15.630 "name": "BaseBdev3", 00:12:15.630 "uuid": "15c82c91-8e64-426e-83e5-337bda903975", 00:12:15.630 "is_configured": true, 00:12:15.630 "data_offset": 2048, 00:12:15.630 "data_size": 63488 00:12:15.630 }, 00:12:15.630 { 00:12:15.630 "name": "BaseBdev4", 00:12:15.630 "uuid": "93002eef-bc61-4e24-bb99-ceecb656a465", 00:12:15.630 "is_configured": true, 00:12:15.630 "data_offset": 2048, 00:12:15.630 "data_size": 63488 00:12:15.630 } 00:12:15.630 ] 00:12:15.630 }' 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.630 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:15.890 
16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.890 [2024-09-28 16:13:30.455350] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.890 "name": "Existed_Raid", 00:12:15.890 "aliases": [ 00:12:15.890 "c6269c6e-ee99-4325-9a28-3996c2fce27f" 00:12:15.890 ], 00:12:15.890 "product_name": "Raid Volume", 00:12:15.890 "block_size": 512, 00:12:15.890 "num_blocks": 63488, 00:12:15.890 "uuid": "c6269c6e-ee99-4325-9a28-3996c2fce27f", 00:12:15.890 "assigned_rate_limits": { 00:12:15.890 "rw_ios_per_sec": 0, 00:12:15.890 "rw_mbytes_per_sec": 0, 00:12:15.890 "r_mbytes_per_sec": 0, 00:12:15.890 "w_mbytes_per_sec": 0 00:12:15.890 }, 00:12:15.890 "claimed": false, 00:12:15.890 "zoned": false, 00:12:15.890 "supported_io_types": { 00:12:15.890 "read": true, 00:12:15.890 "write": true, 00:12:15.890 "unmap": false, 00:12:15.890 "flush": false, 00:12:15.890 "reset": true, 00:12:15.890 "nvme_admin": false, 00:12:15.890 "nvme_io": false, 00:12:15.890 "nvme_io_md": false, 00:12:15.890 "write_zeroes": true, 00:12:15.890 "zcopy": false, 00:12:15.890 "get_zone_info": false, 00:12:15.890 "zone_management": false, 00:12:15.890 "zone_append": false, 00:12:15.890 "compare": false, 00:12:15.890 "compare_and_write": false, 00:12:15.890 "abort": false, 00:12:15.890 "seek_hole": false, 00:12:15.890 "seek_data": false, 00:12:15.890 "copy": false, 00:12:15.890 
"nvme_iov_md": false 00:12:15.890 }, 00:12:15.890 "memory_domains": [ 00:12:15.890 { 00:12:15.890 "dma_device_id": "system", 00:12:15.890 "dma_device_type": 1 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.890 "dma_device_type": 2 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "dma_device_id": "system", 00:12:15.890 "dma_device_type": 1 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.890 "dma_device_type": 2 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "dma_device_id": "system", 00:12:15.890 "dma_device_type": 1 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.890 "dma_device_type": 2 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "dma_device_id": "system", 00:12:15.890 "dma_device_type": 1 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.890 "dma_device_type": 2 00:12:15.890 } 00:12:15.890 ], 00:12:15.890 "driver_specific": { 00:12:15.890 "raid": { 00:12:15.890 "uuid": "c6269c6e-ee99-4325-9a28-3996c2fce27f", 00:12:15.890 "strip_size_kb": 0, 00:12:15.890 "state": "online", 00:12:15.890 "raid_level": "raid1", 00:12:15.890 "superblock": true, 00:12:15.890 "num_base_bdevs": 4, 00:12:15.890 "num_base_bdevs_discovered": 4, 00:12:15.890 "num_base_bdevs_operational": 4, 00:12:15.890 "base_bdevs_list": [ 00:12:15.890 { 00:12:15.890 "name": "BaseBdev1", 00:12:15.890 "uuid": "79603953-28b8-4e5c-ba55-e8910fc387b4", 00:12:15.890 "is_configured": true, 00:12:15.890 "data_offset": 2048, 00:12:15.890 "data_size": 63488 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "name": "BaseBdev2", 00:12:15.890 "uuid": "0add80be-f7ae-459e-97f5-b272b9d40b9b", 00:12:15.890 "is_configured": true, 00:12:15.890 "data_offset": 2048, 00:12:15.890 "data_size": 63488 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "name": "BaseBdev3", 00:12:15.890 "uuid": "15c82c91-8e64-426e-83e5-337bda903975", 00:12:15.890 "is_configured": true, 
00:12:15.890 "data_offset": 2048, 00:12:15.890 "data_size": 63488 00:12:15.890 }, 00:12:15.890 { 00:12:15.890 "name": "BaseBdev4", 00:12:15.890 "uuid": "93002eef-bc61-4e24-bb99-ceecb656a465", 00:12:15.890 "is_configured": true, 00:12:15.890 "data_offset": 2048, 00:12:15.890 "data_size": 63488 00:12:15.890 } 00:12:15.890 ] 00:12:15.890 } 00:12:15.890 } 00:12:15.890 }' 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:15.890 BaseBdev2 00:12:15.890 BaseBdev3 00:12:15.890 BaseBdev4' 00:12:15.890 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.150 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.150 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.151 16:13:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.151 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.151 [2024-09-28 16:13:30.786505] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:16.410 16:13:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.410 "name": "Existed_Raid", 00:12:16.410 "uuid": "c6269c6e-ee99-4325-9a28-3996c2fce27f", 00:12:16.410 "strip_size_kb": 0, 00:12:16.410 
"state": "online", 00:12:16.410 "raid_level": "raid1", 00:12:16.410 "superblock": true, 00:12:16.410 "num_base_bdevs": 4, 00:12:16.410 "num_base_bdevs_discovered": 3, 00:12:16.410 "num_base_bdevs_operational": 3, 00:12:16.410 "base_bdevs_list": [ 00:12:16.410 { 00:12:16.410 "name": null, 00:12:16.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.410 "is_configured": false, 00:12:16.410 "data_offset": 0, 00:12:16.410 "data_size": 63488 00:12:16.410 }, 00:12:16.410 { 00:12:16.410 "name": "BaseBdev2", 00:12:16.410 "uuid": "0add80be-f7ae-459e-97f5-b272b9d40b9b", 00:12:16.410 "is_configured": true, 00:12:16.410 "data_offset": 2048, 00:12:16.410 "data_size": 63488 00:12:16.410 }, 00:12:16.410 { 00:12:16.410 "name": "BaseBdev3", 00:12:16.410 "uuid": "15c82c91-8e64-426e-83e5-337bda903975", 00:12:16.410 "is_configured": true, 00:12:16.410 "data_offset": 2048, 00:12:16.410 "data_size": 63488 00:12:16.410 }, 00:12:16.410 { 00:12:16.410 "name": "BaseBdev4", 00:12:16.410 "uuid": "93002eef-bc61-4e24-bb99-ceecb656a465", 00:12:16.410 "is_configured": true, 00:12:16.410 "data_offset": 2048, 00:12:16.410 "data_size": 63488 00:12:16.410 } 00:12:16.410 ] 00:12:16.410 }' 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.410 16:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.670 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:16.670 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.670 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.670 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.670 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.670 16:13:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.670 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.930 [2024-09-28 16:13:31.371403] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.930 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.930 [2024-09-28 16:13:31.517081] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.190 [2024-09-28 16:13:31.664955] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:17.190 [2024-09-28 16:13:31.665077] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.190 [2024-09-28 16:13:31.763274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.190 [2024-09-28 16:13:31.763402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.190 [2024-09-28 16:13:31.763448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.190 BaseBdev2 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.190 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.191 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.191 16:13:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:17.450 [ 00:12:17.450 { 00:12:17.450 "name": "BaseBdev2", 00:12:17.450 "aliases": [ 00:12:17.450 "2a23a0a8-875d-4665-99dc-d47f7dd1d55a" 00:12:17.450 ], 00:12:17.450 "product_name": "Malloc disk", 00:12:17.450 "block_size": 512, 00:12:17.450 "num_blocks": 65536, 00:12:17.450 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:17.450 "assigned_rate_limits": { 00:12:17.450 "rw_ios_per_sec": 0, 00:12:17.450 "rw_mbytes_per_sec": 0, 00:12:17.450 "r_mbytes_per_sec": 0, 00:12:17.450 "w_mbytes_per_sec": 0 00:12:17.450 }, 00:12:17.450 "claimed": false, 00:12:17.450 "zoned": false, 00:12:17.450 "supported_io_types": { 00:12:17.450 "read": true, 00:12:17.450 "write": true, 00:12:17.450 "unmap": true, 00:12:17.450 "flush": true, 00:12:17.450 "reset": true, 00:12:17.450 "nvme_admin": false, 00:12:17.450 "nvme_io": false, 00:12:17.450 "nvme_io_md": false, 00:12:17.450 "write_zeroes": true, 00:12:17.450 "zcopy": true, 00:12:17.450 "get_zone_info": false, 00:12:17.450 "zone_management": false, 00:12:17.450 "zone_append": false, 00:12:17.450 "compare": false, 00:12:17.450 "compare_and_write": false, 00:12:17.450 "abort": true, 00:12:17.450 "seek_hole": false, 00:12:17.450 "seek_data": false, 00:12:17.450 "copy": true, 00:12:17.450 "nvme_iov_md": false 00:12:17.450 }, 00:12:17.450 "memory_domains": [ 00:12:17.450 { 00:12:17.450 "dma_device_id": "system", 00:12:17.450 "dma_device_type": 1 00:12:17.450 }, 00:12:17.450 { 00:12:17.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.450 "dma_device_type": 2 00:12:17.450 } 00:12:17.450 ], 00:12:17.451 "driver_specific": {} 00:12:17.451 } 00:12:17.451 ] 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.451 16:13:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 BaseBdev3 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 [ 00:12:17.451 { 00:12:17.451 "name": "BaseBdev3", 00:12:17.451 "aliases": [ 00:12:17.451 "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3" 00:12:17.451 ], 00:12:17.451 "product_name": "Malloc disk", 00:12:17.451 "block_size": 512, 00:12:17.451 "num_blocks": 65536, 00:12:17.451 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:17.451 "assigned_rate_limits": { 00:12:17.451 "rw_ios_per_sec": 0, 00:12:17.451 "rw_mbytes_per_sec": 0, 00:12:17.451 "r_mbytes_per_sec": 0, 00:12:17.451 "w_mbytes_per_sec": 0 00:12:17.451 }, 00:12:17.451 "claimed": false, 00:12:17.451 "zoned": false, 00:12:17.451 "supported_io_types": { 00:12:17.451 "read": true, 00:12:17.451 "write": true, 00:12:17.451 "unmap": true, 00:12:17.451 "flush": true, 00:12:17.451 "reset": true, 00:12:17.451 "nvme_admin": false, 00:12:17.451 "nvme_io": false, 00:12:17.451 "nvme_io_md": false, 00:12:17.451 "write_zeroes": true, 00:12:17.451 "zcopy": true, 00:12:17.451 "get_zone_info": false, 00:12:17.451 "zone_management": false, 00:12:17.451 "zone_append": false, 00:12:17.451 "compare": false, 00:12:17.451 "compare_and_write": false, 00:12:17.451 "abort": true, 00:12:17.451 "seek_hole": false, 00:12:17.451 "seek_data": false, 00:12:17.451 "copy": true, 00:12:17.451 "nvme_iov_md": false 00:12:17.451 }, 00:12:17.451 "memory_domains": [ 00:12:17.451 { 00:12:17.451 "dma_device_id": "system", 00:12:17.451 "dma_device_type": 1 00:12:17.451 }, 00:12:17.451 { 00:12:17.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.451 "dma_device_type": 2 00:12:17.451 } 00:12:17.451 ], 00:12:17.451 "driver_specific": {} 00:12:17.451 } 00:12:17.451 ] 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 BaseBdev4 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 [ 00:12:17.451 { 00:12:17.451 "name": "BaseBdev4", 00:12:17.451 "aliases": [ 00:12:17.451 "016a64e2-7f85-4f21-a9aa-3c84670c4310" 00:12:17.451 ], 00:12:17.451 "product_name": "Malloc disk", 00:12:17.451 "block_size": 512, 00:12:17.451 "num_blocks": 65536, 00:12:17.451 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:17.451 "assigned_rate_limits": { 00:12:17.451 "rw_ios_per_sec": 0, 00:12:17.451 "rw_mbytes_per_sec": 0, 00:12:17.451 "r_mbytes_per_sec": 0, 00:12:17.451 "w_mbytes_per_sec": 0 00:12:17.451 }, 00:12:17.451 "claimed": false, 00:12:17.451 "zoned": false, 00:12:17.451 "supported_io_types": { 00:12:17.451 "read": true, 00:12:17.451 "write": true, 00:12:17.451 "unmap": true, 00:12:17.451 "flush": true, 00:12:17.451 "reset": true, 00:12:17.451 "nvme_admin": false, 00:12:17.451 "nvme_io": false, 00:12:17.451 "nvme_io_md": false, 00:12:17.451 "write_zeroes": true, 00:12:17.451 "zcopy": true, 00:12:17.451 "get_zone_info": false, 00:12:17.451 "zone_management": false, 00:12:17.451 "zone_append": false, 00:12:17.451 "compare": false, 00:12:17.451 "compare_and_write": false, 00:12:17.451 "abort": true, 00:12:17.451 "seek_hole": false, 00:12:17.451 "seek_data": false, 00:12:17.451 "copy": true, 00:12:17.451 "nvme_iov_md": false 00:12:17.451 }, 00:12:17.451 "memory_domains": [ 00:12:17.451 { 00:12:17.451 "dma_device_id": "system", 00:12:17.451 "dma_device_type": 1 00:12:17.451 }, 00:12:17.451 { 00:12:17.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.451 "dma_device_type": 2 00:12:17.451 } 00:12:17.451 ], 00:12:17.451 "driver_specific": {} 00:12:17.451 } 00:12:17.451 ] 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 [2024-09-28 16:13:32.071391] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.451 [2024-09-28 16:13:32.071497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.451 [2024-09-28 16:13:32.071540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.451 [2024-09-28 16:13:32.073646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.451 [2024-09-28 16:13:32.073735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.451 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.452 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.452 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.452 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.452 "name": "Existed_Raid", 00:12:17.452 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:17.452 "strip_size_kb": 0, 00:12:17.452 "state": "configuring", 00:12:17.452 "raid_level": "raid1", 00:12:17.452 "superblock": true, 00:12:17.452 "num_base_bdevs": 4, 00:12:17.452 "num_base_bdevs_discovered": 3, 00:12:17.452 "num_base_bdevs_operational": 4, 00:12:17.452 "base_bdevs_list": [ 00:12:17.452 { 00:12:17.452 "name": "BaseBdev1", 00:12:17.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.452 "is_configured": false, 00:12:17.452 "data_offset": 0, 00:12:17.452 "data_size": 0 00:12:17.452 }, 00:12:17.452 { 00:12:17.452 "name": "BaseBdev2", 00:12:17.452 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 
00:12:17.452 "is_configured": true, 00:12:17.452 "data_offset": 2048, 00:12:17.452 "data_size": 63488 00:12:17.452 }, 00:12:17.452 { 00:12:17.452 "name": "BaseBdev3", 00:12:17.452 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:17.452 "is_configured": true, 00:12:17.452 "data_offset": 2048, 00:12:17.452 "data_size": 63488 00:12:17.452 }, 00:12:17.452 { 00:12:17.452 "name": "BaseBdev4", 00:12:17.452 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:17.452 "is_configured": true, 00:12:17.452 "data_offset": 2048, 00:12:17.452 "data_size": 63488 00:12:17.452 } 00:12:17.452 ] 00:12:17.452 }' 00:12:17.452 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.452 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.020 [2024-09-28 16:13:32.530672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.020 "name": "Existed_Raid", 00:12:18.020 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:18.020 "strip_size_kb": 0, 00:12:18.020 "state": "configuring", 00:12:18.020 "raid_level": "raid1", 00:12:18.020 "superblock": true, 00:12:18.020 "num_base_bdevs": 4, 00:12:18.020 "num_base_bdevs_discovered": 2, 00:12:18.020 "num_base_bdevs_operational": 4, 00:12:18.020 "base_bdevs_list": [ 00:12:18.020 { 00:12:18.020 "name": "BaseBdev1", 00:12:18.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.020 "is_configured": false, 00:12:18.020 "data_offset": 0, 00:12:18.020 "data_size": 0 00:12:18.020 }, 00:12:18.020 { 00:12:18.020 "name": null, 00:12:18.020 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:18.020 
"is_configured": false, 00:12:18.020 "data_offset": 0, 00:12:18.020 "data_size": 63488 00:12:18.020 }, 00:12:18.020 { 00:12:18.020 "name": "BaseBdev3", 00:12:18.020 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:18.020 "is_configured": true, 00:12:18.020 "data_offset": 2048, 00:12:18.020 "data_size": 63488 00:12:18.020 }, 00:12:18.020 { 00:12:18.020 "name": "BaseBdev4", 00:12:18.020 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:18.020 "is_configured": true, 00:12:18.020 "data_offset": 2048, 00:12:18.020 "data_size": 63488 00:12:18.020 } 00:12:18.020 ] 00:12:18.020 }' 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.020 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.589 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:18.589 16:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.589 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.589 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.589 16:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.589 [2024-09-28 16:13:33.055834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.589 BaseBdev1 
00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.589 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.590 [ 00:12:18.590 { 00:12:18.590 "name": "BaseBdev1", 00:12:18.590 "aliases": [ 00:12:18.590 "1da600ca-3858-478c-b4fd-f64392c274d5" 00:12:18.590 ], 00:12:18.590 "product_name": "Malloc disk", 00:12:18.590 "block_size": 512, 00:12:18.590 "num_blocks": 65536, 00:12:18.590 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:18.590 "assigned_rate_limits": { 00:12:18.590 
"rw_ios_per_sec": 0, 00:12:18.590 "rw_mbytes_per_sec": 0, 00:12:18.590 "r_mbytes_per_sec": 0, 00:12:18.590 "w_mbytes_per_sec": 0 00:12:18.590 }, 00:12:18.590 "claimed": true, 00:12:18.590 "claim_type": "exclusive_write", 00:12:18.590 "zoned": false, 00:12:18.590 "supported_io_types": { 00:12:18.590 "read": true, 00:12:18.590 "write": true, 00:12:18.590 "unmap": true, 00:12:18.590 "flush": true, 00:12:18.590 "reset": true, 00:12:18.590 "nvme_admin": false, 00:12:18.590 "nvme_io": false, 00:12:18.590 "nvme_io_md": false, 00:12:18.590 "write_zeroes": true, 00:12:18.590 "zcopy": true, 00:12:18.590 "get_zone_info": false, 00:12:18.590 "zone_management": false, 00:12:18.590 "zone_append": false, 00:12:18.590 "compare": false, 00:12:18.590 "compare_and_write": false, 00:12:18.590 "abort": true, 00:12:18.590 "seek_hole": false, 00:12:18.590 "seek_data": false, 00:12:18.590 "copy": true, 00:12:18.590 "nvme_iov_md": false 00:12:18.590 }, 00:12:18.590 "memory_domains": [ 00:12:18.590 { 00:12:18.590 "dma_device_id": "system", 00:12:18.590 "dma_device_type": 1 00:12:18.590 }, 00:12:18.590 { 00:12:18.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.590 "dma_device_type": 2 00:12:18.590 } 00:12:18.590 ], 00:12:18.590 "driver_specific": {} 00:12:18.590 } 00:12:18.590 ] 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.590 "name": "Existed_Raid", 00:12:18.590 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:18.590 "strip_size_kb": 0, 00:12:18.590 "state": "configuring", 00:12:18.590 "raid_level": "raid1", 00:12:18.590 "superblock": true, 00:12:18.590 "num_base_bdevs": 4, 00:12:18.590 "num_base_bdevs_discovered": 3, 00:12:18.590 "num_base_bdevs_operational": 4, 00:12:18.590 "base_bdevs_list": [ 00:12:18.590 { 00:12:18.590 "name": "BaseBdev1", 00:12:18.590 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:18.590 "is_configured": true, 00:12:18.590 "data_offset": 2048, 00:12:18.590 "data_size": 63488 
00:12:18.590 }, 00:12:18.590 { 00:12:18.590 "name": null, 00:12:18.590 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:18.590 "is_configured": false, 00:12:18.590 "data_offset": 0, 00:12:18.590 "data_size": 63488 00:12:18.590 }, 00:12:18.590 { 00:12:18.590 "name": "BaseBdev3", 00:12:18.590 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:18.590 "is_configured": true, 00:12:18.590 "data_offset": 2048, 00:12:18.590 "data_size": 63488 00:12:18.590 }, 00:12:18.590 { 00:12:18.590 "name": "BaseBdev4", 00:12:18.590 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:18.590 "is_configured": true, 00:12:18.590 "data_offset": 2048, 00:12:18.590 "data_size": 63488 00:12:18.590 } 00:12:18.590 ] 00:12:18.590 }' 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.590 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.159 
[2024-09-28 16:13:33.587090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.159 16:13:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.159 "name": "Existed_Raid", 00:12:19.159 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:19.159 "strip_size_kb": 0, 00:12:19.159 "state": "configuring", 00:12:19.159 "raid_level": "raid1", 00:12:19.159 "superblock": true, 00:12:19.159 "num_base_bdevs": 4, 00:12:19.159 "num_base_bdevs_discovered": 2, 00:12:19.159 "num_base_bdevs_operational": 4, 00:12:19.159 "base_bdevs_list": [ 00:12:19.159 { 00:12:19.159 "name": "BaseBdev1", 00:12:19.159 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:19.159 "is_configured": true, 00:12:19.159 "data_offset": 2048, 00:12:19.159 "data_size": 63488 00:12:19.159 }, 00:12:19.159 { 00:12:19.159 "name": null, 00:12:19.159 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:19.159 "is_configured": false, 00:12:19.159 "data_offset": 0, 00:12:19.159 "data_size": 63488 00:12:19.159 }, 00:12:19.159 { 00:12:19.159 "name": null, 00:12:19.159 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:19.159 "is_configured": false, 00:12:19.159 "data_offset": 0, 00:12:19.159 "data_size": 63488 00:12:19.159 }, 00:12:19.159 { 00:12:19.159 "name": "BaseBdev4", 00:12:19.159 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:19.159 "is_configured": true, 00:12:19.159 "data_offset": 2048, 00:12:19.159 "data_size": 63488 00:12:19.159 } 00:12:19.159 ] 00:12:19.159 }' 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.159 16:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.420 
16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 [2024-09-28 16:13:34.098236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.420 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.679 "name": "Existed_Raid", 00:12:19.679 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:19.679 "strip_size_kb": 0, 00:12:19.679 "state": "configuring", 00:12:19.679 "raid_level": "raid1", 00:12:19.679 "superblock": true, 00:12:19.679 "num_base_bdevs": 4, 00:12:19.679 "num_base_bdevs_discovered": 3, 00:12:19.679 "num_base_bdevs_operational": 4, 00:12:19.679 "base_bdevs_list": [ 00:12:19.679 { 00:12:19.679 "name": "BaseBdev1", 00:12:19.679 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:19.679 "is_configured": true, 00:12:19.679 "data_offset": 2048, 00:12:19.679 "data_size": 63488 00:12:19.679 }, 00:12:19.679 { 00:12:19.679 "name": null, 00:12:19.679 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:19.679 "is_configured": false, 00:12:19.679 "data_offset": 0, 00:12:19.679 "data_size": 63488 00:12:19.679 }, 00:12:19.679 { 00:12:19.679 "name": "BaseBdev3", 00:12:19.679 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:19.679 "is_configured": true, 00:12:19.679 "data_offset": 2048, 00:12:19.679 "data_size": 63488 00:12:19.679 }, 00:12:19.679 { 00:12:19.679 "name": "BaseBdev4", 00:12:19.679 "uuid": 
"016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:19.679 "is_configured": true, 00:12:19.679 "data_offset": 2048, 00:12:19.679 "data_size": 63488 00:12:19.679 } 00:12:19.679 ] 00:12:19.679 }' 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.679 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.939 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 [2024-09-28 16:13:34.617350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.199 "name": "Existed_Raid", 00:12:20.199 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:20.199 "strip_size_kb": 0, 00:12:20.199 "state": "configuring", 00:12:20.199 "raid_level": "raid1", 00:12:20.199 "superblock": true, 00:12:20.199 "num_base_bdevs": 4, 00:12:20.199 "num_base_bdevs_discovered": 2, 00:12:20.199 "num_base_bdevs_operational": 4, 00:12:20.199 "base_bdevs_list": [ 00:12:20.199 { 00:12:20.199 "name": null, 00:12:20.199 
"uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:20.199 "is_configured": false, 00:12:20.199 "data_offset": 0, 00:12:20.199 "data_size": 63488 00:12:20.199 }, 00:12:20.199 { 00:12:20.199 "name": null, 00:12:20.199 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:20.199 "is_configured": false, 00:12:20.199 "data_offset": 0, 00:12:20.199 "data_size": 63488 00:12:20.199 }, 00:12:20.199 { 00:12:20.199 "name": "BaseBdev3", 00:12:20.199 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:20.199 "is_configured": true, 00:12:20.199 "data_offset": 2048, 00:12:20.199 "data_size": 63488 00:12:20.199 }, 00:12:20.199 { 00:12:20.199 "name": "BaseBdev4", 00:12:20.199 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:20.199 "is_configured": true, 00:12:20.199 "data_offset": 2048, 00:12:20.199 "data_size": 63488 00:12:20.199 } 00:12:20.199 ] 00:12:20.199 }' 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.199 16:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.768 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.769 [2024-09-28 16:13:35.211001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.769 16:13:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.769 "name": "Existed_Raid", 00:12:20.769 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:20.769 "strip_size_kb": 0, 00:12:20.769 "state": "configuring", 00:12:20.769 "raid_level": "raid1", 00:12:20.769 "superblock": true, 00:12:20.769 "num_base_bdevs": 4, 00:12:20.769 "num_base_bdevs_discovered": 3, 00:12:20.769 "num_base_bdevs_operational": 4, 00:12:20.769 "base_bdevs_list": [ 00:12:20.769 { 00:12:20.769 "name": null, 00:12:20.769 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:20.769 "is_configured": false, 00:12:20.769 "data_offset": 0, 00:12:20.769 "data_size": 63488 00:12:20.769 }, 00:12:20.769 { 00:12:20.769 "name": "BaseBdev2", 00:12:20.769 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:20.769 "is_configured": true, 00:12:20.769 "data_offset": 2048, 00:12:20.769 "data_size": 63488 00:12:20.769 }, 00:12:20.769 { 00:12:20.769 "name": "BaseBdev3", 00:12:20.769 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:20.769 "is_configured": true, 00:12:20.769 "data_offset": 2048, 00:12:20.769 "data_size": 63488 00:12:20.769 }, 00:12:20.769 { 00:12:20.769 "name": "BaseBdev4", 00:12:20.769 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:20.769 "is_configured": true, 00:12:20.769 "data_offset": 2048, 00:12:20.769 "data_size": 63488 00:12:20.769 } 00:12:20.769 ] 00:12:20.769 }' 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.769 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.029 16:13:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1da600ca-3858-478c-b4fd-f64392c274d5 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.029 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.288 [2024-09-28 16:13:35.734899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:21.288 [2024-09-28 16:13:35.735186] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.288 [2024-09-28 16:13:35.735208] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.288 [2024-09-28 16:13:35.735578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:21.288 NewBaseBdev 00:12:21.288 [2024-09-28 16:13:35.735781] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.288 [2024-09-28 16:13:35.735798] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:21.288 [2024-09-28 16:13:35.735969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:21.288 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.288 16:13:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.288 [ 00:12:21.288 { 00:12:21.288 "name": "NewBaseBdev", 00:12:21.288 "aliases": [ 00:12:21.288 "1da600ca-3858-478c-b4fd-f64392c274d5" 00:12:21.288 ], 00:12:21.288 "product_name": "Malloc disk", 00:12:21.288 "block_size": 512, 00:12:21.288 "num_blocks": 65536, 00:12:21.288 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:21.288 "assigned_rate_limits": { 00:12:21.288 "rw_ios_per_sec": 0, 00:12:21.288 "rw_mbytes_per_sec": 0, 00:12:21.288 "r_mbytes_per_sec": 0, 00:12:21.288 "w_mbytes_per_sec": 0 00:12:21.288 }, 00:12:21.288 "claimed": true, 00:12:21.288 "claim_type": "exclusive_write", 00:12:21.288 "zoned": false, 00:12:21.288 "supported_io_types": { 00:12:21.288 "read": true, 00:12:21.288 "write": true, 00:12:21.288 "unmap": true, 00:12:21.288 "flush": true, 00:12:21.288 "reset": true, 00:12:21.288 "nvme_admin": false, 00:12:21.288 "nvme_io": false, 00:12:21.288 "nvme_io_md": false, 00:12:21.288 "write_zeroes": true, 00:12:21.288 "zcopy": true, 00:12:21.288 "get_zone_info": false, 00:12:21.288 "zone_management": false, 00:12:21.288 "zone_append": false, 00:12:21.288 "compare": false, 00:12:21.288 "compare_and_write": false, 00:12:21.288 "abort": true, 00:12:21.288 "seek_hole": false, 00:12:21.288 "seek_data": false, 00:12:21.288 "copy": true, 00:12:21.288 "nvme_iov_md": false 00:12:21.288 }, 00:12:21.288 "memory_domains": [ 00:12:21.288 { 00:12:21.289 "dma_device_id": "system", 00:12:21.289 "dma_device_type": 1 00:12:21.289 }, 00:12:21.289 { 00:12:21.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.289 "dma_device_type": 2 00:12:21.289 } 00:12:21.289 ], 00:12:21.289 "driver_specific": {} 00:12:21.289 } 00:12:21.289 ] 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:21.289 16:13:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.289 "name": "Existed_Raid", 00:12:21.289 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:21.289 "strip_size_kb": 0, 00:12:21.289 
"state": "online", 00:12:21.289 "raid_level": "raid1", 00:12:21.289 "superblock": true, 00:12:21.289 "num_base_bdevs": 4, 00:12:21.289 "num_base_bdevs_discovered": 4, 00:12:21.289 "num_base_bdevs_operational": 4, 00:12:21.289 "base_bdevs_list": [ 00:12:21.289 { 00:12:21.289 "name": "NewBaseBdev", 00:12:21.289 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:21.289 "is_configured": true, 00:12:21.289 "data_offset": 2048, 00:12:21.289 "data_size": 63488 00:12:21.289 }, 00:12:21.289 { 00:12:21.289 "name": "BaseBdev2", 00:12:21.289 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:21.289 "is_configured": true, 00:12:21.289 "data_offset": 2048, 00:12:21.289 "data_size": 63488 00:12:21.289 }, 00:12:21.289 { 00:12:21.289 "name": "BaseBdev3", 00:12:21.289 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:21.289 "is_configured": true, 00:12:21.289 "data_offset": 2048, 00:12:21.289 "data_size": 63488 00:12:21.289 }, 00:12:21.289 { 00:12:21.289 "name": "BaseBdev4", 00:12:21.289 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:21.289 "is_configured": true, 00:12:21.289 "data_offset": 2048, 00:12:21.289 "data_size": 63488 00:12:21.289 } 00:12:21.289 ] 00:12:21.289 }' 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.289 16:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.549 
16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.549 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.549 [2024-09-28 16:13:36.218377] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.809 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.809 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.809 "name": "Existed_Raid", 00:12:21.809 "aliases": [ 00:12:21.809 "dc14eef0-0601-4c3a-8562-211662f431b4" 00:12:21.809 ], 00:12:21.809 "product_name": "Raid Volume", 00:12:21.809 "block_size": 512, 00:12:21.809 "num_blocks": 63488, 00:12:21.809 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:21.809 "assigned_rate_limits": { 00:12:21.809 "rw_ios_per_sec": 0, 00:12:21.809 "rw_mbytes_per_sec": 0, 00:12:21.809 "r_mbytes_per_sec": 0, 00:12:21.809 "w_mbytes_per_sec": 0 00:12:21.809 }, 00:12:21.810 "claimed": false, 00:12:21.810 "zoned": false, 00:12:21.810 "supported_io_types": { 00:12:21.810 "read": true, 00:12:21.810 "write": true, 00:12:21.810 "unmap": false, 00:12:21.810 "flush": false, 00:12:21.810 "reset": true, 00:12:21.810 "nvme_admin": false, 00:12:21.810 "nvme_io": false, 00:12:21.810 "nvme_io_md": false, 00:12:21.810 "write_zeroes": true, 00:12:21.810 "zcopy": false, 00:12:21.810 "get_zone_info": false, 00:12:21.810 "zone_management": false, 00:12:21.810 "zone_append": false, 00:12:21.810 "compare": false, 00:12:21.810 "compare_and_write": false, 00:12:21.810 
"abort": false, 00:12:21.810 "seek_hole": false, 00:12:21.810 "seek_data": false, 00:12:21.810 "copy": false, 00:12:21.810 "nvme_iov_md": false 00:12:21.810 }, 00:12:21.810 "memory_domains": [ 00:12:21.810 { 00:12:21.810 "dma_device_id": "system", 00:12:21.810 "dma_device_type": 1 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.810 "dma_device_type": 2 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "dma_device_id": "system", 00:12:21.810 "dma_device_type": 1 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.810 "dma_device_type": 2 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "dma_device_id": "system", 00:12:21.810 "dma_device_type": 1 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.810 "dma_device_type": 2 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "dma_device_id": "system", 00:12:21.810 "dma_device_type": 1 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.810 "dma_device_type": 2 00:12:21.810 } 00:12:21.810 ], 00:12:21.810 "driver_specific": { 00:12:21.810 "raid": { 00:12:21.810 "uuid": "dc14eef0-0601-4c3a-8562-211662f431b4", 00:12:21.810 "strip_size_kb": 0, 00:12:21.810 "state": "online", 00:12:21.810 "raid_level": "raid1", 00:12:21.810 "superblock": true, 00:12:21.810 "num_base_bdevs": 4, 00:12:21.810 "num_base_bdevs_discovered": 4, 00:12:21.810 "num_base_bdevs_operational": 4, 00:12:21.810 "base_bdevs_list": [ 00:12:21.810 { 00:12:21.810 "name": "NewBaseBdev", 00:12:21.810 "uuid": "1da600ca-3858-478c-b4fd-f64392c274d5", 00:12:21.810 "is_configured": true, 00:12:21.810 "data_offset": 2048, 00:12:21.810 "data_size": 63488 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "name": "BaseBdev2", 00:12:21.810 "uuid": "2a23a0a8-875d-4665-99dc-d47f7dd1d55a", 00:12:21.810 "is_configured": true, 00:12:21.810 "data_offset": 2048, 00:12:21.810 "data_size": 63488 00:12:21.810 }, 00:12:21.810 { 
00:12:21.810 "name": "BaseBdev3", 00:12:21.810 "uuid": "82c9a9a5-65b5-4cfe-9cee-81d25475e3a3", 00:12:21.810 "is_configured": true, 00:12:21.810 "data_offset": 2048, 00:12:21.810 "data_size": 63488 00:12:21.810 }, 00:12:21.810 { 00:12:21.810 "name": "BaseBdev4", 00:12:21.810 "uuid": "016a64e2-7f85-4f21-a9aa-3c84670c4310", 00:12:21.810 "is_configured": true, 00:12:21.810 "data_offset": 2048, 00:12:21.810 "data_size": 63488 00:12:21.810 } 00:12:21.810 ] 00:12:21.810 } 00:12:21.810 } 00:12:21.810 }' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:21.810 BaseBdev2 00:12:21.810 BaseBdev3 00:12:21.810 BaseBdev4' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.810 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.070 [2024-09-28 16:13:36.529520] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.070 [2024-09-28 16:13:36.529544] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.070 [2024-09-28 16:13:36.529628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.070 [2024-09-28 16:13:36.529948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.070 [2024-09-28 16:13:36.529964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73889 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73889 ']' 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73889 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73889 00:12:22.070 killing process with pid 73889 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73889' 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73889 00:12:22.070 [2024-09-28 16:13:36.574732] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.070 16:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73889 00:12:22.329 [2024-09-28 16:13:36.996205] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.744 16:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:23.744 00:12:23.744 real 0m11.858s 00:12:23.744 user 0m18.393s 00:12:23.744 sys 0m2.299s 00:12:23.744 16:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:23.744 ************************************ 00:12:23.744 END TEST raid_state_function_test_sb 00:12:23.744 ************************************ 00:12:23.744 16:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.744 16:13:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:23.744 16:13:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:23.744 16:13:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.744 16:13:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.744 ************************************ 00:12:23.744 START TEST raid_superblock_test 00:12:23.744 ************************************ 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:23.744 16:13:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74566 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74566 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74566 ']' 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.744 16:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.004 [2024-09-28 16:13:38.514153] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:24.004 [2024-09-28 16:13:38.514400] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74566 ] 00:12:24.004 [2024-09-28 16:13:38.682901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.263 [2024-09-28 16:13:38.934184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.523 [2024-09-28 16:13:39.161697] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.523 [2024-09-28 16:13:39.161832] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:24.783 
16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.783 malloc1 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.783 [2024-09-28 16:13:39.400587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:24.783 [2024-09-28 16:13:39.400680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.783 [2024-09-28 16:13:39.400703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:24.783 [2024-09-28 16:13:39.400716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.783 [2024-09-28 16:13:39.403188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.783 [2024-09-28 16:13:39.403281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:24.783 pt1 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.783 malloc2 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.783 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 [2024-09-28 16:13:39.472255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.044 [2024-09-28 16:13:39.472378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.044 [2024-09-28 16:13:39.472417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:25.044 [2024-09-28 16:13:39.472444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.044 [2024-09-28 16:13:39.474768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.044 [2024-09-28 16:13:39.474839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.044 
pt2 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 malloc3 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 [2024-09-28 16:13:39.536481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:25.044 [2024-09-28 16:13:39.536601] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.044 [2024-09-28 16:13:39.536626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:25.044 [2024-09-28 16:13:39.536635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.044 [2024-09-28 16:13:39.538944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.044 [2024-09-28 16:13:39.538979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:25.044 pt3 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 malloc4 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 [2024-09-28 16:13:39.597993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:25.044 [2024-09-28 16:13:39.598118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.044 [2024-09-28 16:13:39.598155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:25.044 [2024-09-28 16:13:39.598183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.044 [2024-09-28 16:13:39.600562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.044 [2024-09-28 16:13:39.600633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:25.044 pt4 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 [2024-09-28 16:13:39.610039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.044 [2024-09-28 16:13:39.612113] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.044 [2024-09-28 16:13:39.612220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.044 [2024-09-28 16:13:39.612305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:25.044 [2024-09-28 16:13:39.612530] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:25.044 [2024-09-28 16:13:39.612574] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.044 [2024-09-28 16:13:39.612855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:25.044 [2024-09-28 16:13:39.613058] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:25.044 [2024-09-28 16:13:39.613104] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:25.044 [2024-09-28 16:13:39.613299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.044 
16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.044 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.044 "name": "raid_bdev1", 00:12:25.044 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:25.044 "strip_size_kb": 0, 00:12:25.044 "state": "online", 00:12:25.044 "raid_level": "raid1", 00:12:25.044 "superblock": true, 00:12:25.044 "num_base_bdevs": 4, 00:12:25.044 "num_base_bdevs_discovered": 4, 00:12:25.044 "num_base_bdevs_operational": 4, 00:12:25.044 "base_bdevs_list": [ 00:12:25.044 { 00:12:25.044 "name": "pt1", 00:12:25.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.044 "is_configured": true, 00:12:25.044 "data_offset": 2048, 00:12:25.044 "data_size": 63488 00:12:25.044 }, 00:12:25.044 { 00:12:25.044 "name": "pt2", 00:12:25.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.045 "is_configured": true, 00:12:25.045 "data_offset": 2048, 00:12:25.045 "data_size": 63488 00:12:25.045 }, 00:12:25.045 { 00:12:25.045 "name": "pt3", 00:12:25.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.045 "is_configured": true, 00:12:25.045 "data_offset": 2048, 00:12:25.045 "data_size": 63488 
00:12:25.045 }, 00:12:25.045 { 00:12:25.045 "name": "pt4", 00:12:25.045 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.045 "is_configured": true, 00:12:25.045 "data_offset": 2048, 00:12:25.045 "data_size": 63488 00:12:25.045 } 00:12:25.045 ] 00:12:25.045 }' 00:12:25.045 16:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.045 16:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 [2024-09-28 16:13:40.105447] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.614 "name": "raid_bdev1", 00:12:25.614 "aliases": [ 00:12:25.614 "60c2a48a-7bca-40d8-b557-e6846109f03b" 00:12:25.614 ], 
00:12:25.614 "product_name": "Raid Volume", 00:12:25.614 "block_size": 512, 00:12:25.614 "num_blocks": 63488, 00:12:25.614 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:25.614 "assigned_rate_limits": { 00:12:25.614 "rw_ios_per_sec": 0, 00:12:25.614 "rw_mbytes_per_sec": 0, 00:12:25.614 "r_mbytes_per_sec": 0, 00:12:25.614 "w_mbytes_per_sec": 0 00:12:25.614 }, 00:12:25.614 "claimed": false, 00:12:25.614 "zoned": false, 00:12:25.614 "supported_io_types": { 00:12:25.614 "read": true, 00:12:25.614 "write": true, 00:12:25.614 "unmap": false, 00:12:25.614 "flush": false, 00:12:25.614 "reset": true, 00:12:25.614 "nvme_admin": false, 00:12:25.614 "nvme_io": false, 00:12:25.614 "nvme_io_md": false, 00:12:25.614 "write_zeroes": true, 00:12:25.614 "zcopy": false, 00:12:25.614 "get_zone_info": false, 00:12:25.614 "zone_management": false, 00:12:25.614 "zone_append": false, 00:12:25.614 "compare": false, 00:12:25.614 "compare_and_write": false, 00:12:25.614 "abort": false, 00:12:25.614 "seek_hole": false, 00:12:25.614 "seek_data": false, 00:12:25.614 "copy": false, 00:12:25.614 "nvme_iov_md": false 00:12:25.614 }, 00:12:25.614 "memory_domains": [ 00:12:25.614 { 00:12:25.614 "dma_device_id": "system", 00:12:25.614 "dma_device_type": 1 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.614 "dma_device_type": 2 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "dma_device_id": "system", 00:12:25.614 "dma_device_type": 1 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.614 "dma_device_type": 2 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "dma_device_id": "system", 00:12:25.614 "dma_device_type": 1 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.614 "dma_device_type": 2 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "dma_device_id": "system", 00:12:25.614 "dma_device_type": 1 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:25.614 "dma_device_type": 2 00:12:25.614 } 00:12:25.614 ], 00:12:25.614 "driver_specific": { 00:12:25.614 "raid": { 00:12:25.614 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:25.614 "strip_size_kb": 0, 00:12:25.614 "state": "online", 00:12:25.614 "raid_level": "raid1", 00:12:25.614 "superblock": true, 00:12:25.614 "num_base_bdevs": 4, 00:12:25.614 "num_base_bdevs_discovered": 4, 00:12:25.614 "num_base_bdevs_operational": 4, 00:12:25.614 "base_bdevs_list": [ 00:12:25.614 { 00:12:25.614 "name": "pt1", 00:12:25.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.614 "is_configured": true, 00:12:25.614 "data_offset": 2048, 00:12:25.614 "data_size": 63488 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "name": "pt2", 00:12:25.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.614 "is_configured": true, 00:12:25.614 "data_offset": 2048, 00:12:25.614 "data_size": 63488 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "name": "pt3", 00:12:25.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.614 "is_configured": true, 00:12:25.614 "data_offset": 2048, 00:12:25.614 "data_size": 63488 00:12:25.614 }, 00:12:25.614 { 00:12:25.614 "name": "pt4", 00:12:25.614 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.614 "is_configured": true, 00:12:25.614 "data_offset": 2048, 00:12:25.614 "data_size": 63488 00:12:25.614 } 00:12:25.614 ] 00:12:25.614 } 00:12:25.614 } 00:12:25.614 }' 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:25.614 pt2 00:12:25.614 pt3 00:12:25.614 pt4' 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.614 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.615 16:13:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.615 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 [2024-09-28 16:13:40.392869] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=60c2a48a-7bca-40d8-b557-e6846109f03b 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 60c2a48a-7bca-40d8-b557-e6846109f03b ']' 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 [2024-09-28 16:13:40.424546] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.875 [2024-09-28 16:13:40.424568] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.875 [2024-09-28 16:13:40.424637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.875 [2024-09-28 16:13:40.424717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.875 [2024-09-28 16:13:40.424733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.141 16:13:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.141 [2024-09-28 16:13:40.584303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:26.141 [2024-09-28 16:13:40.586384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:26.141 [2024-09-28 16:13:40.586424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:26.141 [2024-09-28 16:13:40.586456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:26.141 [2024-09-28 16:13:40.586503] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:26.141 [2024-09-28 16:13:40.586550] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:26.141 [2024-09-28 16:13:40.586568] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:26.141 [2024-09-28 16:13:40.586585] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:26.141 [2024-09-28 16:13:40.586597] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.141 [2024-09-28 16:13:40.586607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:26.141 request: 00:12:26.141 { 00:12:26.141 "name": "raid_bdev1", 00:12:26.141 "raid_level": "raid1", 00:12:26.141 "base_bdevs": [ 00:12:26.141 "malloc1", 00:12:26.141 "malloc2", 00:12:26.141 "malloc3", 00:12:26.141 "malloc4" 00:12:26.141 ], 00:12:26.141 "superblock": false, 00:12:26.141 "method": "bdev_raid_create", 00:12:26.141 "req_id": 1 00:12:26.141 } 00:12:26.141 Got JSON-RPC error response 00:12:26.141 response: 00:12:26.141 { 00:12:26.141 "code": -17, 00:12:26.141 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:26.141 } 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.141 
16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.141 [2024-09-28 16:13:40.648167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.141 [2024-09-28 16:13:40.648271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.141 [2024-09-28 16:13:40.648304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.141 [2024-09-28 16:13:40.648334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.141 [2024-09-28 16:13:40.650707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.141 [2024-09-28 16:13:40.650794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.141 [2024-09-28 16:13:40.650882] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:26.141 [2024-09-28 16:13:40.650966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.141 pt1 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.141 16:13:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.141 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.142 "name": "raid_bdev1", 00:12:26.142 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:26.142 "strip_size_kb": 0, 00:12:26.142 "state": "configuring", 00:12:26.142 "raid_level": "raid1", 00:12:26.142 "superblock": true, 00:12:26.142 "num_base_bdevs": 4, 00:12:26.142 "num_base_bdevs_discovered": 1, 00:12:26.142 "num_base_bdevs_operational": 4, 00:12:26.142 "base_bdevs_list": [ 00:12:26.142 { 00:12:26.142 "name": "pt1", 00:12:26.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.142 "is_configured": true, 00:12:26.142 "data_offset": 2048, 00:12:26.142 "data_size": 63488 00:12:26.142 }, 00:12:26.142 { 00:12:26.142 "name": null, 00:12:26.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.142 "is_configured": false, 00:12:26.142 "data_offset": 2048, 00:12:26.142 "data_size": 63488 00:12:26.142 }, 00:12:26.142 { 00:12:26.142 "name": null, 00:12:26.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.142 
"is_configured": false, 00:12:26.142 "data_offset": 2048, 00:12:26.142 "data_size": 63488 00:12:26.142 }, 00:12:26.142 { 00:12:26.142 "name": null, 00:12:26.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.142 "is_configured": false, 00:12:26.142 "data_offset": 2048, 00:12:26.142 "data_size": 63488 00:12:26.142 } 00:12:26.142 ] 00:12:26.142 }' 00:12:26.142 16:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.142 16:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.711 [2024-09-28 16:13:41.155300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.711 [2024-09-28 16:13:41.155354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.711 [2024-09-28 16:13:41.155388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:26.711 [2024-09-28 16:13:41.155398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.711 [2024-09-28 16:13:41.155840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.711 [2024-09-28 16:13:41.155860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.711 [2024-09-28 16:13:41.155931] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:26.711 [2024-09-28 16:13:41.155964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:26.711 pt2 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.711 [2024-09-28 16:13:41.167307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.711 "name": "raid_bdev1", 00:12:26.711 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:26.711 "strip_size_kb": 0, 00:12:26.711 "state": "configuring", 00:12:26.711 "raid_level": "raid1", 00:12:26.711 "superblock": true, 00:12:26.711 "num_base_bdevs": 4, 00:12:26.711 "num_base_bdevs_discovered": 1, 00:12:26.711 "num_base_bdevs_operational": 4, 00:12:26.711 "base_bdevs_list": [ 00:12:26.711 { 00:12:26.711 "name": "pt1", 00:12:26.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.711 "is_configured": true, 00:12:26.711 "data_offset": 2048, 00:12:26.711 "data_size": 63488 00:12:26.711 }, 00:12:26.711 { 00:12:26.711 "name": null, 00:12:26.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.711 "is_configured": false, 00:12:26.711 "data_offset": 0, 00:12:26.711 "data_size": 63488 00:12:26.711 }, 00:12:26.711 { 00:12:26.711 "name": null, 00:12:26.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.711 "is_configured": false, 00:12:26.711 "data_offset": 2048, 00:12:26.711 "data_size": 63488 00:12:26.711 }, 00:12:26.711 { 00:12:26.711 "name": null, 00:12:26.711 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.711 "is_configured": false, 00:12:26.711 "data_offset": 2048, 00:12:26.711 "data_size": 63488 00:12:26.711 } 00:12:26.711 ] 00:12:26.711 }' 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.711 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.972 [2024-09-28 16:13:41.578596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.972 [2024-09-28 16:13:41.578691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.972 [2024-09-28 16:13:41.578757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:26.972 [2024-09-28 16:13:41.578790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.972 [2024-09-28 16:13:41.579254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.972 [2024-09-28 16:13:41.579316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.972 [2024-09-28 16:13:41.579420] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:26.972 [2024-09-28 16:13:41.579475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.972 pt2 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.972 16:13:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.972 [2024-09-28 16:13:41.590565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.972 [2024-09-28 16:13:41.590651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.972 [2024-09-28 16:13:41.590682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:26.972 [2024-09-28 16:13:41.590709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.972 [2024-09-28 16:13:41.591102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.972 [2024-09-28 16:13:41.591160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.972 [2024-09-28 16:13:41.591257] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:26.972 [2024-09-28 16:13:41.591302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.972 pt3 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.972 [2024-09-28 16:13:41.602516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:26.972 [2024-09-28 
16:13:41.602556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.972 [2024-09-28 16:13:41.602571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:26.972 [2024-09-28 16:13:41.602579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.972 [2024-09-28 16:13:41.602940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.972 [2024-09-28 16:13:41.602969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:26.972 [2024-09-28 16:13:41.603022] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:26.972 [2024-09-28 16:13:41.603048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:26.972 [2024-09-28 16:13:41.603188] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:26.972 [2024-09-28 16:13:41.603201] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.972 [2024-09-28 16:13:41.603470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:26.972 [2024-09-28 16:13:41.603633] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:26.972 [2024-09-28 16:13:41.603646] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:26.972 [2024-09-28 16:13:41.603763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.972 pt4 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.972 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.232 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.232 "name": "raid_bdev1", 00:12:27.232 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:27.232 "strip_size_kb": 0, 00:12:27.232 "state": "online", 00:12:27.232 "raid_level": "raid1", 00:12:27.232 "superblock": true, 00:12:27.232 "num_base_bdevs": 4, 00:12:27.232 
"num_base_bdevs_discovered": 4, 00:12:27.232 "num_base_bdevs_operational": 4, 00:12:27.232 "base_bdevs_list": [ 00:12:27.232 { 00:12:27.232 "name": "pt1", 00:12:27.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.232 "is_configured": true, 00:12:27.232 "data_offset": 2048, 00:12:27.232 "data_size": 63488 00:12:27.232 }, 00:12:27.232 { 00:12:27.232 "name": "pt2", 00:12:27.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.232 "is_configured": true, 00:12:27.232 "data_offset": 2048, 00:12:27.232 "data_size": 63488 00:12:27.232 }, 00:12:27.232 { 00:12:27.232 "name": "pt3", 00:12:27.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.232 "is_configured": true, 00:12:27.232 "data_offset": 2048, 00:12:27.232 "data_size": 63488 00:12:27.232 }, 00:12:27.232 { 00:12:27.232 "name": "pt4", 00:12:27.232 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.232 "is_configured": true, 00:12:27.232 "data_offset": 2048, 00:12:27.232 "data_size": 63488 00:12:27.232 } 00:12:27.232 ] 00:12:27.232 }' 00:12:27.232 16:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.232 16:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.492 [2024-09-28 16:13:42.086022] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.492 "name": "raid_bdev1", 00:12:27.492 "aliases": [ 00:12:27.492 "60c2a48a-7bca-40d8-b557-e6846109f03b" 00:12:27.492 ], 00:12:27.492 "product_name": "Raid Volume", 00:12:27.492 "block_size": 512, 00:12:27.492 "num_blocks": 63488, 00:12:27.492 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:27.492 "assigned_rate_limits": { 00:12:27.492 "rw_ios_per_sec": 0, 00:12:27.492 "rw_mbytes_per_sec": 0, 00:12:27.492 "r_mbytes_per_sec": 0, 00:12:27.492 "w_mbytes_per_sec": 0 00:12:27.492 }, 00:12:27.492 "claimed": false, 00:12:27.492 "zoned": false, 00:12:27.492 "supported_io_types": { 00:12:27.492 "read": true, 00:12:27.492 "write": true, 00:12:27.492 "unmap": false, 00:12:27.492 "flush": false, 00:12:27.492 "reset": true, 00:12:27.492 "nvme_admin": false, 00:12:27.492 "nvme_io": false, 00:12:27.492 "nvme_io_md": false, 00:12:27.492 "write_zeroes": true, 00:12:27.492 "zcopy": false, 00:12:27.492 "get_zone_info": false, 00:12:27.492 "zone_management": false, 00:12:27.492 "zone_append": false, 00:12:27.492 "compare": false, 00:12:27.492 "compare_and_write": false, 00:12:27.492 "abort": false, 00:12:27.492 "seek_hole": false, 00:12:27.492 "seek_data": false, 00:12:27.492 "copy": false, 00:12:27.492 "nvme_iov_md": false 00:12:27.492 }, 00:12:27.492 "memory_domains": [ 00:12:27.492 { 00:12:27.492 "dma_device_id": "system", 00:12:27.492 
"dma_device_type": 1 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.492 "dma_device_type": 2 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "dma_device_id": "system", 00:12:27.492 "dma_device_type": 1 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.492 "dma_device_type": 2 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "dma_device_id": "system", 00:12:27.492 "dma_device_type": 1 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.492 "dma_device_type": 2 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "dma_device_id": "system", 00:12:27.492 "dma_device_type": 1 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.492 "dma_device_type": 2 00:12:27.492 } 00:12:27.492 ], 00:12:27.492 "driver_specific": { 00:12:27.492 "raid": { 00:12:27.492 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:27.492 "strip_size_kb": 0, 00:12:27.492 "state": "online", 00:12:27.492 "raid_level": "raid1", 00:12:27.492 "superblock": true, 00:12:27.492 "num_base_bdevs": 4, 00:12:27.492 "num_base_bdevs_discovered": 4, 00:12:27.492 "num_base_bdevs_operational": 4, 00:12:27.492 "base_bdevs_list": [ 00:12:27.492 { 00:12:27.492 "name": "pt1", 00:12:27.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.492 "is_configured": true, 00:12:27.492 "data_offset": 2048, 00:12:27.492 "data_size": 63488 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "name": "pt2", 00:12:27.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.492 "is_configured": true, 00:12:27.492 "data_offset": 2048, 00:12:27.492 "data_size": 63488 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "name": "pt3", 00:12:27.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.492 "is_configured": true, 00:12:27.492 "data_offset": 2048, 00:12:27.492 "data_size": 63488 00:12:27.492 }, 00:12:27.492 { 00:12:27.492 "name": "pt4", 00:12:27.492 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:27.492 "is_configured": true, 00:12:27.492 "data_offset": 2048, 00:12:27.492 "data_size": 63488 00:12:27.492 } 00:12:27.492 ] 00:12:27.492 } 00:12:27.492 } 00:12:27.492 }' 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.492 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:27.492 pt2 00:12:27.492 pt3 00:12:27.492 pt4' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.751 16:13:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.751 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.010 [2024-09-28 16:13:42.441385] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 60c2a48a-7bca-40d8-b557-e6846109f03b '!=' 60c2a48a-7bca-40d8-b557-e6846109f03b ']' 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.010 [2024-09-28 16:13:42.485060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:28.010 16:13:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.010 "name": "raid_bdev1", 00:12:28.010 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:28.010 "strip_size_kb": 0, 00:12:28.010 "state": "online", 
00:12:28.010 "raid_level": "raid1", 00:12:28.010 "superblock": true, 00:12:28.010 "num_base_bdevs": 4, 00:12:28.010 "num_base_bdevs_discovered": 3, 00:12:28.010 "num_base_bdevs_operational": 3, 00:12:28.010 "base_bdevs_list": [ 00:12:28.010 { 00:12:28.010 "name": null, 00:12:28.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.010 "is_configured": false, 00:12:28.010 "data_offset": 0, 00:12:28.010 "data_size": 63488 00:12:28.010 }, 00:12:28.010 { 00:12:28.010 "name": "pt2", 00:12:28.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.010 "is_configured": true, 00:12:28.010 "data_offset": 2048, 00:12:28.010 "data_size": 63488 00:12:28.010 }, 00:12:28.010 { 00:12:28.010 "name": "pt3", 00:12:28.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.010 "is_configured": true, 00:12:28.010 "data_offset": 2048, 00:12:28.010 "data_size": 63488 00:12:28.010 }, 00:12:28.010 { 00:12:28.010 "name": "pt4", 00:12:28.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.010 "is_configured": true, 00:12:28.010 "data_offset": 2048, 00:12:28.010 "data_size": 63488 00:12:28.010 } 00:12:28.010 ] 00:12:28.010 }' 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.010 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.270 [2024-09-28 16:13:42.936295] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.270 [2024-09-28 16:13:42.936374] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.270 [2024-09-28 16:13:42.936452] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:28.270 [2024-09-28 16:13:42.936559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.270 [2024-09-28 16:13:42.936603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.270 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.529 16:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.529 
16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.529 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 [2024-09-28 16:13:43.036132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.530 [2024-09-28 16:13:43.036243] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.530 [2024-09-28 16:13:43.036265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:28.530 [2024-09-28 16:13:43.036273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.530 [2024-09-28 16:13:43.038761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.530 [2024-09-28 16:13:43.038796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.530 [2024-09-28 16:13:43.038870] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.530 [2024-09-28 16:13:43.038938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.530 pt2 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.530 "name": "raid_bdev1", 00:12:28.530 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:28.530 "strip_size_kb": 0, 00:12:28.530 "state": "configuring", 00:12:28.530 "raid_level": "raid1", 00:12:28.530 "superblock": true, 00:12:28.530 "num_base_bdevs": 4, 00:12:28.530 "num_base_bdevs_discovered": 1, 00:12:28.530 "num_base_bdevs_operational": 3, 00:12:28.530 "base_bdevs_list": [ 00:12:28.530 { 00:12:28.530 "name": null, 00:12:28.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.530 "is_configured": false, 00:12:28.530 "data_offset": 2048, 00:12:28.530 "data_size": 63488 00:12:28.530 }, 00:12:28.530 { 00:12:28.530 "name": "pt2", 00:12:28.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.530 "is_configured": true, 00:12:28.530 "data_offset": 2048, 00:12:28.530 "data_size": 63488 00:12:28.530 }, 00:12:28.530 { 00:12:28.530 "name": null, 00:12:28.530 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.530 "is_configured": false, 00:12:28.530 "data_offset": 2048, 00:12:28.530 "data_size": 63488 00:12:28.530 }, 00:12:28.530 { 00:12:28.530 "name": null, 00:12:28.530 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.530 "is_configured": false, 00:12:28.530 "data_offset": 2048, 00:12:28.530 "data_size": 63488 00:12:28.530 } 00:12:28.530 ] 00:12:28.530 }' 
00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.530 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.099 [2024-09-28 16:13:43.483378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:29.099 [2024-09-28 16:13:43.483480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.099 [2024-09-28 16:13:43.483517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:29.099 [2024-09-28 16:13:43.483544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.099 [2024-09-28 16:13:43.484053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.099 [2024-09-28 16:13:43.484112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:29.099 [2024-09-28 16:13:43.484238] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:29.099 [2024-09-28 16:13:43.484299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:29.099 pt3 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.099 "name": "raid_bdev1", 00:12:29.099 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:29.099 "strip_size_kb": 0, 00:12:29.099 "state": "configuring", 00:12:29.099 "raid_level": "raid1", 00:12:29.099 "superblock": true, 00:12:29.099 "num_base_bdevs": 4, 00:12:29.099 "num_base_bdevs_discovered": 2, 00:12:29.099 "num_base_bdevs_operational": 3, 00:12:29.099 
"base_bdevs_list": [ 00:12:29.099 { 00:12:29.099 "name": null, 00:12:29.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.099 "is_configured": false, 00:12:29.099 "data_offset": 2048, 00:12:29.099 "data_size": 63488 00:12:29.099 }, 00:12:29.099 { 00:12:29.099 "name": "pt2", 00:12:29.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.099 "is_configured": true, 00:12:29.099 "data_offset": 2048, 00:12:29.099 "data_size": 63488 00:12:29.099 }, 00:12:29.099 { 00:12:29.099 "name": "pt3", 00:12:29.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.099 "is_configured": true, 00:12:29.099 "data_offset": 2048, 00:12:29.099 "data_size": 63488 00:12:29.099 }, 00:12:29.099 { 00:12:29.099 "name": null, 00:12:29.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.099 "is_configured": false, 00:12:29.099 "data_offset": 2048, 00:12:29.099 "data_size": 63488 00:12:29.099 } 00:12:29.099 ] 00:12:29.099 }' 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.099 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.368 [2024-09-28 16:13:43.922628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:29.368 [2024-09-28 16:13:43.922681] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.368 [2024-09-28 16:13:43.922720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:29.368 [2024-09-28 16:13:43.922728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.368 [2024-09-28 16:13:43.923173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.368 [2024-09-28 16:13:43.923190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:29.368 [2024-09-28 16:13:43.923276] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:29.368 [2024-09-28 16:13:43.923305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:29.368 [2024-09-28 16:13:43.923449] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:29.368 [2024-09-28 16:13:43.923462] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.368 [2024-09-28 16:13:43.923713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:29.368 [2024-09-28 16:13:43.923874] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:29.368 [2024-09-28 16:13:43.923888] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:29.368 [2024-09-28 16:13:43.924033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.368 pt4 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.368 "name": "raid_bdev1", 00:12:29.368 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:29.368 "strip_size_kb": 0, 00:12:29.368 "state": "online", 00:12:29.368 "raid_level": "raid1", 00:12:29.368 "superblock": true, 00:12:29.368 "num_base_bdevs": 4, 00:12:29.368 "num_base_bdevs_discovered": 3, 00:12:29.368 "num_base_bdevs_operational": 3, 00:12:29.368 "base_bdevs_list": [ 00:12:29.368 { 00:12:29.368 "name": null, 00:12:29.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.368 "is_configured": false, 00:12:29.368 
"data_offset": 2048, 00:12:29.368 "data_size": 63488 00:12:29.368 }, 00:12:29.368 { 00:12:29.368 "name": "pt2", 00:12:29.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.368 "is_configured": true, 00:12:29.368 "data_offset": 2048, 00:12:29.368 "data_size": 63488 00:12:29.368 }, 00:12:29.368 { 00:12:29.368 "name": "pt3", 00:12:29.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.368 "is_configured": true, 00:12:29.368 "data_offset": 2048, 00:12:29.368 "data_size": 63488 00:12:29.368 }, 00:12:29.368 { 00:12:29.368 "name": "pt4", 00:12:29.368 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.368 "is_configured": true, 00:12:29.368 "data_offset": 2048, 00:12:29.368 "data_size": 63488 00:12:29.368 } 00:12:29.368 ] 00:12:29.368 }' 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.368 16:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.628 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.628 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.628 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.888 [2024-09-28 16:13:44.317924] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.888 [2024-09-28 16:13:44.318003] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.888 [2024-09-28 16:13:44.318091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.888 [2024-09-28 16:13:44.318214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.888 [2024-09-28 16:13:44.318297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:29.888 16:13:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.888 [2024-09-28 16:13:44.389819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:29.888 [2024-09-28 16:13:44.389918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:29.888 [2024-09-28 16:13:44.389961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:29.888 [2024-09-28 16:13:44.389993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.888 [2024-09-28 16:13:44.392451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.888 [2024-09-28 16:13:44.392535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:29.888 [2024-09-28 16:13:44.392649] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:29.888 [2024-09-28 16:13:44.392726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:29.888 [2024-09-28 16:13:44.392886] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:29.888 [2024-09-28 16:13:44.392942] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.888 [2024-09-28 16:13:44.392989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:29.888 [2024-09-28 16:13:44.393087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.888 [2024-09-28 16:13:44.393234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:29.888 pt1 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.888 "name": "raid_bdev1", 00:12:29.888 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:29.888 "strip_size_kb": 0, 00:12:29.888 "state": "configuring", 00:12:29.888 "raid_level": "raid1", 00:12:29.888 "superblock": true, 00:12:29.888 "num_base_bdevs": 4, 00:12:29.888 "num_base_bdevs_discovered": 2, 00:12:29.888 "num_base_bdevs_operational": 3, 00:12:29.888 "base_bdevs_list": [ 00:12:29.888 { 00:12:29.888 "name": null, 00:12:29.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.888 "is_configured": false, 00:12:29.888 "data_offset": 2048, 00:12:29.888 
"data_size": 63488 00:12:29.888 }, 00:12:29.888 { 00:12:29.888 "name": "pt2", 00:12:29.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.888 "is_configured": true, 00:12:29.888 "data_offset": 2048, 00:12:29.888 "data_size": 63488 00:12:29.888 }, 00:12:29.888 { 00:12:29.888 "name": "pt3", 00:12:29.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.888 "is_configured": true, 00:12:29.888 "data_offset": 2048, 00:12:29.888 "data_size": 63488 00:12:29.888 }, 00:12:29.888 { 00:12:29.888 "name": null, 00:12:29.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.888 "is_configured": false, 00:12:29.888 "data_offset": 2048, 00:12:29.888 "data_size": 63488 00:12:29.888 } 00:12:29.888 ] 00:12:29.888 }' 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.888 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.491 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.491 [2024-09-28 
16:13:44.872988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:30.491 [2024-09-28 16:13:44.873082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.491 [2024-09-28 16:13:44.873119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:30.491 [2024-09-28 16:13:44.873144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.491 [2024-09-28 16:13:44.873597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.492 [2024-09-28 16:13:44.873650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:30.492 [2024-09-28 16:13:44.873743] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:30.492 [2024-09-28 16:13:44.873787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:30.492 [2024-09-28 16:13:44.873947] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:30.492 [2024-09-28 16:13:44.873981] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.492 [2024-09-28 16:13:44.874267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:30.492 [2024-09-28 16:13:44.874457] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:30.492 [2024-09-28 16:13:44.874498] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:30.492 [2024-09-28 16:13:44.874673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.492 pt4 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.492 16:13:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.492 "name": "raid_bdev1", 00:12:30.492 "uuid": "60c2a48a-7bca-40d8-b557-e6846109f03b", 00:12:30.492 "strip_size_kb": 0, 00:12:30.492 "state": "online", 00:12:30.492 "raid_level": "raid1", 00:12:30.492 "superblock": true, 00:12:30.492 "num_base_bdevs": 4, 00:12:30.492 "num_base_bdevs_discovered": 3, 00:12:30.492 "num_base_bdevs_operational": 3, 00:12:30.492 "base_bdevs_list": [ 00:12:30.492 { 
00:12:30.492 "name": null, 00:12:30.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.492 "is_configured": false, 00:12:30.492 "data_offset": 2048, 00:12:30.492 "data_size": 63488 00:12:30.492 }, 00:12:30.492 { 00:12:30.492 "name": "pt2", 00:12:30.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.492 "is_configured": true, 00:12:30.492 "data_offset": 2048, 00:12:30.492 "data_size": 63488 00:12:30.492 }, 00:12:30.492 { 00:12:30.492 "name": "pt3", 00:12:30.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.492 "is_configured": true, 00:12:30.492 "data_offset": 2048, 00:12:30.492 "data_size": 63488 00:12:30.492 }, 00:12:30.492 { 00:12:30.492 "name": "pt4", 00:12:30.492 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.492 "is_configured": true, 00:12:30.492 "data_offset": 2048, 00:12:30.492 "data_size": 63488 00:12:30.492 } 00:12:30.492 ] 00:12:30.492 }' 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.492 16:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.751 
16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.751 [2024-09-28 16:13:45.348436] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 60c2a48a-7bca-40d8-b557-e6846109f03b '!=' 60c2a48a-7bca-40d8-b557-e6846109f03b ']' 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74566 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74566 ']' 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74566 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74566 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74566' 00:12:30.751 killing process with pid 74566 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74566 00:12:30.751 [2024-09-28 16:13:45.418862] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.751 [2024-09-28 16:13:45.418950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.751 [2024-09-28 16:13:45.419024] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.751 [2024-09-28 16:13:45.419040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:30.751 16:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74566 00:12:31.319 [2024-09-28 16:13:45.825376] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.698 16:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:32.698 00:12:32.698 real 0m8.719s 00:12:32.698 user 0m13.474s 00:12:32.698 sys 0m1.656s 00:12:32.698 16:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.698 16:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.698 ************************************ 00:12:32.698 END TEST raid_superblock_test 00:12:32.698 ************************************ 00:12:32.698 16:13:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:32.698 16:13:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:32.698 16:13:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.698 16:13:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.698 ************************************ 00:12:32.698 START TEST raid_read_error_test 00:12:32.698 ************************************ 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:32.698 16:13:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YCC7abQpv6 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75053 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75053 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75053 ']' 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.698 16:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:32.698 [2024-09-28 16:13:47.313295] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:32.698 [2024-09-28 16:13:47.313425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75053 ] 00:12:32.958 [2024-09-28 16:13:47.482439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.218 [2024-09-28 16:13:47.732193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.478 [2024-09-28 16:13:47.965361] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.478 [2024-09-28 16:13:47.965395] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.478 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.478 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:33.478 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.478 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.478 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.478 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 BaseBdev1_malloc 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 true 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 [2024-09-28 16:13:48.206507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:33.738 [2024-09-28 16:13:48.206568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.738 [2024-09-28 16:13:48.206602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:33.738 [2024-09-28 16:13:48.206613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.738 [2024-09-28 16:13:48.208983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.738 [2024-09-28 16:13:48.209020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.738 BaseBdev1 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 BaseBdev2_malloc 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 true 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 [2024-09-28 16:13:48.307188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.738 [2024-09-28 16:13:48.307267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.738 [2024-09-28 16:13:48.307284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:33.738 [2024-09-28 16:13:48.307295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.738 [2024-09-28 16:13:48.309612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.738 [2024-09-28 16:13:48.309648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.738 BaseBdev2 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 BaseBdev3_malloc 00:12:33.738 16:13:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 true 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 [2024-09-28 16:13:48.378847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:33.738 [2024-09-28 16:13:48.378900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.738 [2024-09-28 16:13:48.378939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:33.738 [2024-09-28 16:13:48.378950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.738 [2024-09-28 16:13:48.381300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.738 [2024-09-28 16:13:48.381337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:33.738 BaseBdev3 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.738 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.739 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:33.739 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.739 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.998 BaseBdev4_malloc 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.998 true 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.998 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.998 [2024-09-28 16:13:48.447178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:33.998 [2024-09-28 16:13:48.447240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.998 [2024-09-28 16:13:48.447258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:33.999 [2024-09-28 16:13:48.447271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.999 [2024-09-28 16:13:48.449599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.999 [2024-09-28 16:13:48.449634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:33.999 BaseBdev4 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.999 [2024-09-28 16:13:48.459256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.999 [2024-09-28 16:13:48.461315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.999 [2024-09-28 16:13:48.461390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.999 [2024-09-28 16:13:48.461446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.999 [2024-09-28 16:13:48.461705] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:33.999 [2024-09-28 16:13:48.461726] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.999 [2024-09-28 16:13:48.461961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:33.999 [2024-09-28 16:13:48.462135] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:33.999 [2024-09-28 16:13:48.462151] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:33.999 [2024-09-28 16:13:48.462319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:33.999 16:13:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.999 "name": "raid_bdev1", 00:12:33.999 "uuid": "9bb03988-de0b-4e89-a2da-e1951d365c00", 00:12:33.999 "strip_size_kb": 0, 00:12:33.999 "state": "online", 00:12:33.999 "raid_level": "raid1", 00:12:33.999 "superblock": true, 00:12:33.999 "num_base_bdevs": 4, 00:12:33.999 "num_base_bdevs_discovered": 4, 00:12:33.999 "num_base_bdevs_operational": 4, 00:12:33.999 "base_bdevs_list": [ 00:12:33.999 { 
00:12:33.999 "name": "BaseBdev1", 00:12:33.999 "uuid": "6153d0b3-2521-5d99-96d7-73297d243a76", 00:12:33.999 "is_configured": true, 00:12:33.999 "data_offset": 2048, 00:12:33.999 "data_size": 63488 00:12:33.999 }, 00:12:33.999 { 00:12:33.999 "name": "BaseBdev2", 00:12:33.999 "uuid": "4d9f3695-4e23-56fd-b776-fda095707315", 00:12:33.999 "is_configured": true, 00:12:33.999 "data_offset": 2048, 00:12:33.999 "data_size": 63488 00:12:33.999 }, 00:12:33.999 { 00:12:33.999 "name": "BaseBdev3", 00:12:33.999 "uuid": "664b060b-12a5-54b0-8a99-f83cb01ed08d", 00:12:33.999 "is_configured": true, 00:12:33.999 "data_offset": 2048, 00:12:33.999 "data_size": 63488 00:12:33.999 }, 00:12:33.999 { 00:12:33.999 "name": "BaseBdev4", 00:12:33.999 "uuid": "89b6dbc7-5287-54ee-9345-e74ac7cf64a4", 00:12:33.999 "is_configured": true, 00:12:33.999 "data_offset": 2048, 00:12:33.999 "data_size": 63488 00:12:33.999 } 00:12:33.999 ] 00:12:33.999 }' 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.999 16:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.259 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.259 16:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:34.518 [2024-09-28 16:13:48.995706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.460 16:13:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:35.460 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.461 16:13:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.461 "name": "raid_bdev1", 00:12:35.461 "uuid": "9bb03988-de0b-4e89-a2da-e1951d365c00", 00:12:35.461 "strip_size_kb": 0, 00:12:35.461 "state": "online", 00:12:35.461 "raid_level": "raid1", 00:12:35.461 "superblock": true, 00:12:35.461 "num_base_bdevs": 4, 00:12:35.461 "num_base_bdevs_discovered": 4, 00:12:35.461 "num_base_bdevs_operational": 4, 00:12:35.461 "base_bdevs_list": [ 00:12:35.461 { 00:12:35.461 "name": "BaseBdev1", 00:12:35.461 "uuid": "6153d0b3-2521-5d99-96d7-73297d243a76", 00:12:35.461 "is_configured": true, 00:12:35.461 "data_offset": 2048, 00:12:35.461 "data_size": 63488 00:12:35.461 }, 00:12:35.461 { 00:12:35.461 "name": "BaseBdev2", 00:12:35.461 "uuid": "4d9f3695-4e23-56fd-b776-fda095707315", 00:12:35.461 "is_configured": true, 00:12:35.461 "data_offset": 2048, 00:12:35.461 "data_size": 63488 00:12:35.461 }, 00:12:35.461 { 00:12:35.461 "name": "BaseBdev3", 00:12:35.461 "uuid": "664b060b-12a5-54b0-8a99-f83cb01ed08d", 00:12:35.461 "is_configured": true, 00:12:35.461 "data_offset": 2048, 00:12:35.461 "data_size": 63488 00:12:35.461 }, 00:12:35.461 { 00:12:35.461 "name": "BaseBdev4", 00:12:35.461 "uuid": "89b6dbc7-5287-54ee-9345-e74ac7cf64a4", 00:12:35.461 "is_configured": true, 00:12:35.461 "data_offset": 2048, 00:12:35.461 "data_size": 63488 00:12:35.461 } 00:12:35.461 ] 00:12:35.461 }' 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.461 16:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.731 [2024-09-28 16:13:50.348155] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.731 [2024-09-28 16:13:50.348198] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.731 [2024-09-28 16:13:50.350830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.731 [2024-09-28 16:13:50.350899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.731 [2024-09-28 16:13:50.351048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.731 [2024-09-28 16:13:50.351063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:35.731 { 00:12:35.731 "results": [ 00:12:35.731 { 00:12:35.731 "job": "raid_bdev1", 00:12:35.731 "core_mask": "0x1", 00:12:35.731 "workload": "randrw", 00:12:35.731 "percentage": 50, 00:12:35.731 "status": "finished", 00:12:35.731 "queue_depth": 1, 00:12:35.731 "io_size": 131072, 00:12:35.731 "runtime": 1.352899, 00:12:35.731 "iops": 8200.168674823472, 00:12:35.731 "mibps": 1025.021084352934, 00:12:35.731 "io_failed": 0, 00:12:35.731 "io_timeout": 0, 00:12:35.731 "avg_latency_us": 119.50434831212118, 00:12:35.731 "min_latency_us": 22.581659388646287, 00:12:35.731 "max_latency_us": 1523.926637554585 00:12:35.731 } 00:12:35.731 ], 00:12:35.731 "core_count": 1 00:12:35.731 } 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75053 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75053 ']' 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75053 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75053 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.731 killing process with pid 75053 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75053' 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75053 00:12:35.731 [2024-09-28 16:13:50.397947] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.731 16:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75053 00:12:36.308 [2024-09-28 16:13:50.734728] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YCC7abQpv6 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:37.689 00:12:37.689 real 0m4.919s 00:12:37.689 user 0m5.562s 00:12:37.689 sys 0m0.749s 
00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.689 16:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.689 ************************************ 00:12:37.689 END TEST raid_read_error_test 00:12:37.689 ************************************ 00:12:37.689 16:13:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:37.689 16:13:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:37.689 16:13:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.689 16:13:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.689 ************************************ 00:12:37.689 START TEST raid_write_error_test 00:12:37.689 ************************************ 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TXAlJwiScS 00:12:37.689 16:13:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75204 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75204 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75204 ']' 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.689 16:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.689 [2024-09-28 16:13:52.308639] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:37.689 [2024-09-28 16:13:52.308790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75204 ] 00:12:37.949 [2024-09-28 16:13:52.478449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.208 [2024-09-28 16:13:52.720332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.467 [2024-09-28 16:13:52.948889] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.468 [2024-09-28 16:13:52.948930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.468 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.468 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:38.468 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.468 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.468 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.468 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 BaseBdev1_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 true 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 [2024-09-28 16:13:53.176089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.728 [2024-09-28 16:13:53.176152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.728 [2024-09-28 16:13:53.176185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.728 [2024-09-28 16:13:53.176197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.728 [2024-09-28 16:13:53.178573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.728 [2024-09-28 16:13:53.178611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.728 BaseBdev1 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 BaseBdev2_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:38.728 16:13:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 true 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 [2024-09-28 16:13:53.263098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.728 [2024-09-28 16:13:53.263153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.728 [2024-09-28 16:13:53.263171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.728 [2024-09-28 16:13:53.263182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.728 [2024-09-28 16:13:53.265508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.728 [2024-09-28 16:13:53.265545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.728 BaseBdev2 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.728 BaseBdev3_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 true 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 [2024-09-28 16:13:53.336791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:38.728 [2024-09-28 16:13:53.336849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.728 [2024-09-28 16:13:53.336881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:38.728 [2024-09-28 16:13:53.336893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.728 [2024-09-28 16:13:53.339246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.728 [2024-09-28 16:13:53.339283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:38.728 BaseBdev3 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 BaseBdev4_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 true 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.728 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.728 [2024-09-28 16:13:53.407565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:38.728 [2024-09-28 16:13:53.407621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.728 [2024-09-28 16:13:53.407654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:38.728 [2024-09-28 16:13:53.407667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.728 [2024-09-28 16:13:53.409972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.728 [2024-09-28 16:13:53.410009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:38.989 BaseBdev4 
00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.989 [2024-09-28 16:13:53.419627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.989 [2024-09-28 16:13:53.421678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.989 [2024-09-28 16:13:53.421754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.989 [2024-09-28 16:13:53.421825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:38.989 [2024-09-28 16:13:53.422071] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:38.989 [2024-09-28 16:13:53.422092] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.989 [2024-09-28 16:13:53.422337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:38.989 [2024-09-28 16:13:53.422519] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:38.989 [2024-09-28 16:13:53.422532] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:38.989 [2024-09-28 16:13:53.422678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.989 "name": "raid_bdev1", 00:12:38.989 "uuid": "5c723d07-845f-4e4b-adc9-0f94ff1bb451", 00:12:38.989 "strip_size_kb": 0, 00:12:38.989 "state": "online", 00:12:38.989 "raid_level": "raid1", 00:12:38.989 "superblock": true, 00:12:38.989 "num_base_bdevs": 4, 00:12:38.989 "num_base_bdevs_discovered": 4, 00:12:38.989 
"num_base_bdevs_operational": 4, 00:12:38.989 "base_bdevs_list": [ 00:12:38.989 { 00:12:38.989 "name": "BaseBdev1", 00:12:38.989 "uuid": "4a49db49-18e5-5f0a-ab76-4770c8840f8e", 00:12:38.989 "is_configured": true, 00:12:38.989 "data_offset": 2048, 00:12:38.989 "data_size": 63488 00:12:38.989 }, 00:12:38.989 { 00:12:38.989 "name": "BaseBdev2", 00:12:38.989 "uuid": "83b165dd-a9f8-5bcc-b3a6-4dba7bd5f419", 00:12:38.989 "is_configured": true, 00:12:38.989 "data_offset": 2048, 00:12:38.989 "data_size": 63488 00:12:38.989 }, 00:12:38.989 { 00:12:38.989 "name": "BaseBdev3", 00:12:38.989 "uuid": "612cc61e-07ca-5d11-82cd-416aed1df633", 00:12:38.989 "is_configured": true, 00:12:38.989 "data_offset": 2048, 00:12:38.989 "data_size": 63488 00:12:38.989 }, 00:12:38.989 { 00:12:38.989 "name": "BaseBdev4", 00:12:38.989 "uuid": "05b7905d-e316-54f5-addb-439164fa6040", 00:12:38.989 "is_configured": true, 00:12:38.989 "data_offset": 2048, 00:12:38.989 "data_size": 63488 00:12:38.989 } 00:12:38.989 ] 00:12:38.989 }' 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.989 16:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:39.249 16:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.508 [2024-09-28 16:13:53.976007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.445 [2024-09-28 16:13:54.889360] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:40.445 [2024-09-28 16:13:54.889437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.445 [2024-09-28 16:13:54.889694] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:40.445 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.446 "name": "raid_bdev1", 00:12:40.446 "uuid": "5c723d07-845f-4e4b-adc9-0f94ff1bb451", 00:12:40.446 "strip_size_kb": 0, 00:12:40.446 "state": "online", 00:12:40.446 "raid_level": "raid1", 00:12:40.446 "superblock": true, 00:12:40.446 "num_base_bdevs": 4, 00:12:40.446 "num_base_bdevs_discovered": 3, 00:12:40.446 "num_base_bdevs_operational": 3, 00:12:40.446 "base_bdevs_list": [ 00:12:40.446 { 00:12:40.446 "name": null, 00:12:40.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.446 "is_configured": false, 00:12:40.446 "data_offset": 0, 00:12:40.446 "data_size": 63488 00:12:40.446 }, 00:12:40.446 { 00:12:40.446 "name": "BaseBdev2", 00:12:40.446 "uuid": "83b165dd-a9f8-5bcc-b3a6-4dba7bd5f419", 00:12:40.446 "is_configured": true, 00:12:40.446 "data_offset": 2048, 00:12:40.446 "data_size": 63488 00:12:40.446 }, 00:12:40.446 { 00:12:40.446 "name": "BaseBdev3", 00:12:40.446 "uuid": "612cc61e-07ca-5d11-82cd-416aed1df633", 00:12:40.446 "is_configured": true, 00:12:40.446 "data_offset": 2048, 00:12:40.446 "data_size": 63488 00:12:40.446 }, 00:12:40.446 { 00:12:40.446 "name": "BaseBdev4", 00:12:40.446 "uuid": "05b7905d-e316-54f5-addb-439164fa6040", 00:12:40.446 "is_configured": true, 00:12:40.446 "data_offset": 2048, 00:12:40.446 "data_size": 63488 00:12:40.446 } 00:12:40.446 ] 
00:12:40.446 }' 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.446 16:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.706 [2024-09-28 16:13:55.350702] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.706 [2024-09-28 16:13:55.350746] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.706 [2024-09-28 16:13:55.353456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.706 [2024-09-28 16:13:55.353511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.706 [2024-09-28 16:13:55.353623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.706 [2024-09-28 16:13:55.353640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:40.706 { 00:12:40.706 "results": [ 00:12:40.706 { 00:12:40.706 "job": "raid_bdev1", 00:12:40.706 "core_mask": "0x1", 00:12:40.706 "workload": "randrw", 00:12:40.706 "percentage": 50, 00:12:40.706 "status": "finished", 00:12:40.706 "queue_depth": 1, 00:12:40.706 "io_size": 131072, 00:12:40.706 "runtime": 1.375464, 00:12:40.706 "iops": 9100.928850191645, 00:12:40.706 "mibps": 1137.6161062739557, 00:12:40.706 "io_failed": 0, 00:12:40.706 "io_timeout": 0, 00:12:40.706 "avg_latency_us": 107.44492772329242, 00:12:40.706 "min_latency_us": 21.799126637554586, 00:12:40.706 "max_latency_us": 1495.3082969432314 00:12:40.706 } 00:12:40.706 ], 00:12:40.706 "core_count": 1 
00:12:40.706 } 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75204 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75204 ']' 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75204 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.706 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75204 00:12:40.965 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.965 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.965 killing process with pid 75204 00:12:40.965 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75204' 00:12:40.965 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75204 00:12:40.965 [2024-09-28 16:13:55.400502] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.965 16:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75204 00:12:41.225 [2024-09-28 16:13:55.743531] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TXAlJwiScS 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:42.602 00:12:42.602 real 0m4.936s 00:12:42.602 user 0m5.658s 00:12:42.602 sys 0m0.712s 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.602 16:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.602 ************************************ 00:12:42.602 END TEST raid_write_error_test 00:12:42.602 ************************************ 00:12:42.602 16:13:57 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:42.602 16:13:57 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:42.602 16:13:57 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:42.602 16:13:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:42.602 16:13:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.602 16:13:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.602 ************************************ 00:12:42.602 START TEST raid_rebuild_test 00:12:42.602 ************************************ 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:42.602 
16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75348 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75348 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75348 ']' 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.602 16:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.861 [2024-09-28 16:13:57.309337] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:42.861 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.861 Zero copy mechanism will not be used. 
00:12:42.861 [2024-09-28 16:13:57.309913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75348 ] 00:12:42.861 [2024-09-28 16:13:57.475691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.121 [2024-09-28 16:13:57.714213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.381 [2024-09-28 16:13:57.941275] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.381 [2024-09-28 16:13:57.941320] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.641 BaseBdev1_malloc 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.641 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.641 [2024-09-28 16:13:58.181696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:43.641 
[2024-09-28 16:13:58.181779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.641 [2024-09-28 16:13:58.181810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:43.642 [2024-09-28 16:13:58.181826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.642 [2024-09-28 16:13:58.184299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.642 [2024-09-28 16:13:58.184340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.642 BaseBdev1 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.642 BaseBdev2_malloc 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.642 [2024-09-28 16:13:58.252585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:43.642 [2024-09-28 16:13:58.252650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.642 [2024-09-28 16:13:58.252673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:43.642 [2024-09-28 16:13:58.252686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.642 [2024-09-28 16:13:58.255014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.642 [2024-09-28 16:13:58.255054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:43.642 BaseBdev2 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.642 spare_malloc 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.642 spare_delay 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.642 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.642 [2024-09-28 16:13:58.323469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.642 [2024-09-28 16:13:58.323548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:43.642 [2024-09-28 16:13:58.323567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:43.642 [2024-09-28 16:13:58.323579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.902 [2024-09-28 16:13:58.325957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.902 [2024-09-28 16:13:58.326000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.902 spare 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.902 [2024-09-28 16:13:58.335492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.902 [2024-09-28 16:13:58.337504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.902 [2024-09-28 16:13:58.337591] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:43.902 [2024-09-28 16:13:58.337604] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:43.902 [2024-09-28 16:13:58.337888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:43.902 [2024-09-28 16:13:58.338049] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:43.902 [2024-09-28 16:13:58.338062] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:43.902 [2024-09-28 16:13:58.338207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.902 "name": "raid_bdev1", 00:12:43.902 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:43.902 "strip_size_kb": 0, 00:12:43.902 "state": "online", 00:12:43.902 
"raid_level": "raid1", 00:12:43.902 "superblock": false, 00:12:43.902 "num_base_bdevs": 2, 00:12:43.902 "num_base_bdevs_discovered": 2, 00:12:43.902 "num_base_bdevs_operational": 2, 00:12:43.902 "base_bdevs_list": [ 00:12:43.902 { 00:12:43.902 "name": "BaseBdev1", 00:12:43.902 "uuid": "415cb9fa-972d-5052-814e-c9467a5d6df4", 00:12:43.902 "is_configured": true, 00:12:43.902 "data_offset": 0, 00:12:43.902 "data_size": 65536 00:12:43.902 }, 00:12:43.902 { 00:12:43.902 "name": "BaseBdev2", 00:12:43.902 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:43.902 "is_configured": true, 00:12:43.902 "data_offset": 0, 00:12:43.902 "data_size": 65536 00:12:43.902 } 00:12:43.902 ] 00:12:43.902 }' 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.902 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:44.161 [2024-09-28 16:13:58.735087] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.161 16:13:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.161 16:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:44.421 [2024-09-28 16:13:58.990470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:44.421 /dev/nbd0 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.421 1+0 records in 00:12:44.421 1+0 records out 00:12:44.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236417 s, 17.3 MB/s 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:44.421 16:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:48.619 65536+0 records in 00:12:48.619 65536+0 records out 00:12:48.619 33554432 bytes (34 MB, 32 MiB) copied, 4.12663 s, 8.1 MB/s 00:12:48.619 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:48.619 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.619 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:48.619 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.619 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:48.619 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.619 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.878 [2024-09-28 16:14:03.392393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.878 [2024-09-28 16:14:03.413790] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.878 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.879 "name": "raid_bdev1", 00:12:48.879 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:48.879 "strip_size_kb": 0, 00:12:48.879 "state": "online", 00:12:48.879 "raid_level": "raid1", 00:12:48.879 "superblock": false, 00:12:48.879 "num_base_bdevs": 2, 00:12:48.879 "num_base_bdevs_discovered": 1, 00:12:48.879 "num_base_bdevs_operational": 1, 00:12:48.879 "base_bdevs_list": [ 00:12:48.879 { 00:12:48.879 "name": null, 00:12:48.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.879 "is_configured": false, 00:12:48.879 "data_offset": 0, 00:12:48.879 "data_size": 65536 00:12:48.879 }, 00:12:48.879 { 00:12:48.879 "name": "BaseBdev2", 00:12:48.879 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:48.879 "is_configured": true, 00:12:48.879 "data_offset": 0, 00:12:48.879 "data_size": 65536 00:12:48.879 } 00:12:48.879 ] 00:12:48.879 }' 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.879 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.446 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.446 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.446 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.446 [2024-09-28 16:14:03.877039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.446 [2024-09-28 16:14:03.894017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:49.446 16:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.446 16:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:49.446 [2024-09-28 16:14:03.896077] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.385 "name": "raid_bdev1", 00:12:50.385 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:50.385 "strip_size_kb": 0, 00:12:50.385 "state": "online", 00:12:50.385 "raid_level": "raid1", 00:12:50.385 "superblock": false, 00:12:50.385 "num_base_bdevs": 2, 00:12:50.385 "num_base_bdevs_discovered": 2, 00:12:50.385 "num_base_bdevs_operational": 2, 00:12:50.385 "process": { 00:12:50.385 "type": "rebuild", 00:12:50.385 "target": "spare", 00:12:50.385 "progress": { 00:12:50.385 
"blocks": 20480, 00:12:50.385 "percent": 31 00:12:50.385 } 00:12:50.385 }, 00:12:50.385 "base_bdevs_list": [ 00:12:50.385 { 00:12:50.385 "name": "spare", 00:12:50.385 "uuid": "969d2096-2c93-5fde-a165-c4e86b661266", 00:12:50.385 "is_configured": true, 00:12:50.385 "data_offset": 0, 00:12:50.385 "data_size": 65536 00:12:50.385 }, 00:12:50.385 { 00:12:50.385 "name": "BaseBdev2", 00:12:50.385 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:50.385 "is_configured": true, 00:12:50.385 "data_offset": 0, 00:12:50.385 "data_size": 65536 00:12:50.385 } 00:12:50.385 ] 00:12:50.385 }' 00:12:50.385 16:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.385 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.385 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.385 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.385 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:50.385 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.385 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.385 [2024-09-28 16:14:05.059103] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.646 [2024-09-28 16:14:05.104569] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:50.646 [2024-09-28 16:14:05.104650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.646 [2024-09-28 16:14:05.104666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.646 [2024-09-28 16:14:05.104676] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:50.646 16:14:05 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.646 "name": "raid_bdev1", 00:12:50.646 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:50.646 "strip_size_kb": 0, 00:12:50.646 "state": "online", 00:12:50.646 "raid_level": "raid1", 00:12:50.646 
"superblock": false, 00:12:50.646 "num_base_bdevs": 2, 00:12:50.646 "num_base_bdevs_discovered": 1, 00:12:50.646 "num_base_bdevs_operational": 1, 00:12:50.646 "base_bdevs_list": [ 00:12:50.646 { 00:12:50.646 "name": null, 00:12:50.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.646 "is_configured": false, 00:12:50.646 "data_offset": 0, 00:12:50.646 "data_size": 65536 00:12:50.646 }, 00:12:50.646 { 00:12:50.646 "name": "BaseBdev2", 00:12:50.646 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:50.646 "is_configured": true, 00:12:50.646 "data_offset": 0, 00:12:50.646 "data_size": 65536 00:12:50.646 } 00:12:50.646 ] 00:12:50.646 }' 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.646 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.912 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.180 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:51.180 "name": "raid_bdev1", 00:12:51.180 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:51.180 "strip_size_kb": 0, 00:12:51.180 "state": "online", 00:12:51.180 "raid_level": "raid1", 00:12:51.180 "superblock": false, 00:12:51.180 "num_base_bdevs": 2, 00:12:51.180 "num_base_bdevs_discovered": 1, 00:12:51.180 "num_base_bdevs_operational": 1, 00:12:51.180 "base_bdevs_list": [ 00:12:51.180 { 00:12:51.180 "name": null, 00:12:51.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.180 "is_configured": false, 00:12:51.180 "data_offset": 0, 00:12:51.180 "data_size": 65536 00:12:51.180 }, 00:12:51.180 { 00:12:51.180 "name": "BaseBdev2", 00:12:51.180 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:51.180 "is_configured": true, 00:12:51.180 "data_offset": 0, 00:12:51.180 "data_size": 65536 00:12:51.180 } 00:12:51.180 ] 00:12:51.180 }' 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.181 [2024-09-28 16:14:05.697651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.181 [2024-09-28 16:14:05.713383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:51.181 16:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.181 
16:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:51.181 [2024-09-28 16:14:05.715469] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.119 "name": "raid_bdev1", 00:12:52.119 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:52.119 "strip_size_kb": 0, 00:12:52.119 "state": "online", 00:12:52.119 "raid_level": "raid1", 00:12:52.119 "superblock": false, 00:12:52.119 "num_base_bdevs": 2, 00:12:52.119 "num_base_bdevs_discovered": 2, 00:12:52.119 "num_base_bdevs_operational": 2, 00:12:52.119 "process": { 00:12:52.119 "type": "rebuild", 00:12:52.119 "target": "spare", 00:12:52.119 "progress": { 00:12:52.119 "blocks": 20480, 00:12:52.119 "percent": 31 00:12:52.119 } 00:12:52.119 }, 00:12:52.119 "base_bdevs_list": [ 
00:12:52.119 { 00:12:52.119 "name": "spare", 00:12:52.119 "uuid": "969d2096-2c93-5fde-a165-c4e86b661266", 00:12:52.119 "is_configured": true, 00:12:52.119 "data_offset": 0, 00:12:52.119 "data_size": 65536 00:12:52.119 }, 00:12:52.119 { 00:12:52.119 "name": "BaseBdev2", 00:12:52.119 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:52.119 "is_configured": true, 00:12:52.119 "data_offset": 0, 00:12:52.119 "data_size": 65536 00:12:52.119 } 00:12:52.119 ] 00:12:52.119 }' 00:12:52.119 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=379 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.379 
16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.379 "name": "raid_bdev1", 00:12:52.379 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:52.379 "strip_size_kb": 0, 00:12:52.379 "state": "online", 00:12:52.379 "raid_level": "raid1", 00:12:52.379 "superblock": false, 00:12:52.379 "num_base_bdevs": 2, 00:12:52.379 "num_base_bdevs_discovered": 2, 00:12:52.379 "num_base_bdevs_operational": 2, 00:12:52.379 "process": { 00:12:52.379 "type": "rebuild", 00:12:52.379 "target": "spare", 00:12:52.379 "progress": { 00:12:52.379 "blocks": 22528, 00:12:52.379 "percent": 34 00:12:52.379 } 00:12:52.379 }, 00:12:52.379 "base_bdevs_list": [ 00:12:52.379 { 00:12:52.379 "name": "spare", 00:12:52.379 "uuid": "969d2096-2c93-5fde-a165-c4e86b661266", 00:12:52.379 "is_configured": true, 00:12:52.379 "data_offset": 0, 00:12:52.379 "data_size": 65536 00:12:52.379 }, 00:12:52.379 { 00:12:52.379 "name": "BaseBdev2", 00:12:52.379 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:52.379 "is_configured": true, 00:12:52.379 "data_offset": 0, 00:12:52.379 "data_size": 65536 00:12:52.379 } 00:12:52.379 ] 00:12:52.379 }' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.379 16:14:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.319 16:14:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.319 16:14:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.319 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.319 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.319 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.319 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.580 "name": "raid_bdev1", 00:12:53.580 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:53.580 "strip_size_kb": 0, 00:12:53.580 "state": "online", 00:12:53.580 "raid_level": "raid1", 00:12:53.580 "superblock": false, 00:12:53.580 "num_base_bdevs": 2, 00:12:53.580 "num_base_bdevs_discovered": 2, 00:12:53.580 "num_base_bdevs_operational": 2, 00:12:53.580 "process": { 
00:12:53.580 "type": "rebuild", 00:12:53.580 "target": "spare", 00:12:53.580 "progress": { 00:12:53.580 "blocks": 45056, 00:12:53.580 "percent": 68 00:12:53.580 } 00:12:53.580 }, 00:12:53.580 "base_bdevs_list": [ 00:12:53.580 { 00:12:53.580 "name": "spare", 00:12:53.580 "uuid": "969d2096-2c93-5fde-a165-c4e86b661266", 00:12:53.580 "is_configured": true, 00:12:53.580 "data_offset": 0, 00:12:53.580 "data_size": 65536 00:12:53.580 }, 00:12:53.580 { 00:12:53.580 "name": "BaseBdev2", 00:12:53.580 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:53.580 "is_configured": true, 00:12:53.580 "data_offset": 0, 00:12:53.580 "data_size": 65536 00:12:53.580 } 00:12:53.580 ] 00:12:53.580 }' 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.580 16:14:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.519 [2024-09-28 16:14:08.937668] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:54.519 [2024-09-28 16:14:08.937794] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:54.519 [2024-09-28 16:14:08.937852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.519 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.779 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.779 "name": "raid_bdev1", 00:12:54.779 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:54.779 "strip_size_kb": 0, 00:12:54.779 "state": "online", 00:12:54.779 "raid_level": "raid1", 00:12:54.779 "superblock": false, 00:12:54.779 "num_base_bdevs": 2, 00:12:54.779 "num_base_bdevs_discovered": 2, 00:12:54.779 "num_base_bdevs_operational": 2, 00:12:54.779 "base_bdevs_list": [ 00:12:54.779 { 00:12:54.779 "name": "spare", 00:12:54.779 "uuid": "969d2096-2c93-5fde-a165-c4e86b661266", 00:12:54.779 "is_configured": true, 00:12:54.779 "data_offset": 0, 00:12:54.779 "data_size": 65536 00:12:54.779 }, 00:12:54.779 { 00:12:54.779 "name": "BaseBdev2", 00:12:54.779 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:54.780 "is_configured": true, 00:12:54.780 "data_offset": 0, 00:12:54.780 "data_size": 65536 00:12:54.780 } 00:12:54.780 ] 00:12:54.780 }' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:54.780 16:14:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.780 "name": "raid_bdev1", 00:12:54.780 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:54.780 "strip_size_kb": 0, 00:12:54.780 "state": "online", 00:12:54.780 "raid_level": "raid1", 00:12:54.780 "superblock": false, 00:12:54.780 "num_base_bdevs": 2, 00:12:54.780 "num_base_bdevs_discovered": 2, 00:12:54.780 "num_base_bdevs_operational": 2, 00:12:54.780 "base_bdevs_list": [ 00:12:54.780 { 00:12:54.780 "name": "spare", 00:12:54.780 "uuid": "969d2096-2c93-5fde-a165-c4e86b661266", 00:12:54.780 "is_configured": true, 
00:12:54.780 "data_offset": 0, 00:12:54.780 "data_size": 65536 00:12:54.780 }, 00:12:54.780 { 00:12:54.780 "name": "BaseBdev2", 00:12:54.780 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:54.780 "is_configured": true, 00:12:54.780 "data_offset": 0, 00:12:54.780 "data_size": 65536 00:12:54.780 } 00:12:54.780 ] 00:12:54.780 }' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.780 "name": "raid_bdev1", 00:12:54.780 "uuid": "5be12e25-992c-4b73-8fe7-fc453381905c", 00:12:54.780 "strip_size_kb": 0, 00:12:54.780 "state": "online", 00:12:54.780 "raid_level": "raid1", 00:12:54.780 "superblock": false, 00:12:54.780 "num_base_bdevs": 2, 00:12:54.780 "num_base_bdevs_discovered": 2, 00:12:54.780 "num_base_bdevs_operational": 2, 00:12:54.780 "base_bdevs_list": [ 00:12:54.780 { 00:12:54.780 "name": "spare", 00:12:54.780 "uuid": "969d2096-2c93-5fde-a165-c4e86b661266", 00:12:54.780 "is_configured": true, 00:12:54.780 "data_offset": 0, 00:12:54.780 "data_size": 65536 00:12:54.780 }, 00:12:54.780 { 00:12:54.780 "name": "BaseBdev2", 00:12:54.780 "uuid": "033b01c5-5318-57bb-9d38-91b70d84b36f", 00:12:54.780 "is_configured": true, 00:12:54.780 "data_offset": 0, 00:12:54.780 "data_size": 65536 00:12:54.780 } 00:12:54.780 ] 00:12:54.780 }' 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.780 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.349 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.349 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.349 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.349 [2024-09-28 16:14:09.835111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.350 [2024-09-28 16:14:09.835151] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.350 [2024-09-28 16:14:09.835255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.350 [2024-09-28 16:14:09.835333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.350 [2024-09-28 16:14:09.835343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.350 16:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:55.609 /dev/nbd0 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.609 1+0 records in 00:12:55.609 1+0 records out 00:12:55.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219703 s, 18.6 MB/s 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.609 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:55.870 /dev/nbd1 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.870 1+0 records in 00:12:55.870 1+0 records out 00:12:55.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037251 s, 11.0 MB/s 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.870 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.130 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:56.390 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75348 00:12:56.391 16:14:10 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75348 ']' 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75348 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.391 16:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75348 00:12:56.391 16:14:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:56.391 16:14:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:56.391 killing process with pid 75348 00:12:56.391 16:14:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75348' 00:12:56.391 16:14:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75348 00:12:56.391 Received shutdown signal, test time was about 60.000000 seconds 00:12:56.391 00:12:56.391 Latency(us) 00:12:56.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.391 =================================================================================================================== 00:12:56.391 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:56.391 [2024-09-28 16:14:11.012481] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.391 16:14:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75348 00:12:56.651 [2024-09-28 16:14:11.321446] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.032 16:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:58.032 00:12:58.032 real 0m15.427s 00:12:58.032 user 0m17.221s 00:12:58.032 sys 0m3.202s 00:12:58.032 16:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:58.032 16:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.033 ************************************ 00:12:58.033 END TEST raid_rebuild_test 00:12:58.033 ************************************ 00:12:58.033 16:14:12 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:58.033 16:14:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:58.033 16:14:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.033 16:14:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.033 ************************************ 00:12:58.033 START TEST raid_rebuild_test_sb 00:12:58.033 ************************************ 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:58.033 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.292 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.292 16:14:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75768 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75768 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75768 ']' 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.293 16:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.293 [2024-09-28 16:14:12.811825] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:58.293 [2024-09-28 16:14:12.812004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75768 ] 00:12:58.293 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:58.293 Zero copy mechanism will not be used. 
00:12:58.552 [2024-09-28 16:14:12.979848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.552 [2024-09-28 16:14:13.224500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.811 [2024-09-28 16:14:13.449006] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.811 [2024-09-28 16:14:13.449150] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.069 BaseBdev1_malloc 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.069 [2024-09-28 16:14:13.673714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:59.069 [2024-09-28 16:14:13.673789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.069 [2024-09-28 16:14:13.673811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.069 [2024-09-28 
16:14:13.673827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.069 [2024-09-28 16:14:13.676197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.069 [2024-09-28 16:14:13.676243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.069 BaseBdev1 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.069 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.327 BaseBdev2_malloc 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.327 [2024-09-28 16:14:13.760871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:59.327 [2024-09-28 16:14:13.760928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.327 [2024-09-28 16:14:13.760946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.327 [2024-09-28 16:14:13.760958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.327 [2024-09-28 16:14:13.763224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:59.327 [2024-09-28 16:14:13.763270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.327 BaseBdev2 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.327 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.327 spare_malloc 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.328 spare_delay 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.328 [2024-09-28 16:14:13.835046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.328 [2024-09-28 16:14:13.835104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.328 [2024-09-28 16:14:13.835123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:59.328 [2024-09-28 16:14:13.835134] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.328 [2024-09-28 16:14:13.837442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.328 [2024-09-28 16:14:13.837564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.328 spare 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.328 [2024-09-28 16:14:13.847090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.328 [2024-09-28 16:14:13.849095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.328 [2024-09-28 16:14:13.849327] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.328 [2024-09-28 16:14:13.849347] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.328 [2024-09-28 16:14:13.849597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:59.328 [2024-09-28 16:14:13.849760] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.328 [2024-09-28 16:14:13.849769] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.328 [2024-09-28 16:14:13.849907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.328 "name": "raid_bdev1", 00:12:59.328 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:12:59.328 "strip_size_kb": 0, 00:12:59.328 "state": "online", 00:12:59.328 "raid_level": "raid1", 00:12:59.328 "superblock": true, 00:12:59.328 "num_base_bdevs": 2, 00:12:59.328 
"num_base_bdevs_discovered": 2, 00:12:59.328 "num_base_bdevs_operational": 2, 00:12:59.328 "base_bdevs_list": [ 00:12:59.328 { 00:12:59.328 "name": "BaseBdev1", 00:12:59.328 "uuid": "7990d522-d5ef-5410-b886-0eba21ab5531", 00:12:59.328 "is_configured": true, 00:12:59.328 "data_offset": 2048, 00:12:59.328 "data_size": 63488 00:12:59.328 }, 00:12:59.328 { 00:12:59.328 "name": "BaseBdev2", 00:12:59.328 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:12:59.328 "is_configured": true, 00:12:59.328 "data_offset": 2048, 00:12:59.328 "data_size": 63488 00:12:59.328 } 00:12:59.328 ] 00:12:59.328 }' 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.328 16:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.588 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.588 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:59.588 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.588 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.588 [2024-09-28 16:14:14.254621] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.848 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:59.848 [2024-09-28 16:14:14.501995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:59.848 /dev/nbd0 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.108 1+0 records in 00:13:00.108 1+0 records out 00:13:00.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039096 s, 10.5 MB/s 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.108 16:14:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:00.108 16:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:04.299 63488+0 records in 00:13:04.299 63488+0 records out 00:13:04.299 32505856 bytes (33 MB, 31 MiB) copied, 4.10201 s, 7.9 MB/s 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.299 [2024-09-28 16:14:18.894359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.299 [2024-09-28 16:14:18.918448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.299 16:14:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.299 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.299 "name": "raid_bdev1", 00:13:04.299 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:04.299 "strip_size_kb": 0, 00:13:04.299 "state": "online", 00:13:04.299 "raid_level": "raid1", 00:13:04.299 "superblock": true, 00:13:04.299 "num_base_bdevs": 2, 00:13:04.299 "num_base_bdevs_discovered": 1, 00:13:04.299 "num_base_bdevs_operational": 1, 00:13:04.299 "base_bdevs_list": [ 00:13:04.299 { 00:13:04.299 "name": null, 00:13:04.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.300 "is_configured": false, 00:13:04.300 "data_offset": 0, 00:13:04.300 "data_size": 63488 00:13:04.300 }, 00:13:04.300 { 00:13:04.300 "name": "BaseBdev2", 00:13:04.300 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:04.300 "is_configured": true, 00:13:04.300 "data_offset": 2048, 00:13:04.300 "data_size": 63488 00:13:04.300 } 00:13:04.300 ] 00:13:04.300 }' 00:13:04.300 16:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.300 16:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.867 16:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.867 16:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.867 16:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.867 [2024-09-28 16:14:19.357723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:13:04.867 [2024-09-28 16:14:19.375286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:04.867 16:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.867 16:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:04.867 [2024-09-28 16:14:19.377460] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.806 "name": "raid_bdev1", 00:13:05.806 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:05.806 "strip_size_kb": 0, 00:13:05.806 "state": "online", 00:13:05.806 "raid_level": "raid1", 00:13:05.806 "superblock": true, 00:13:05.806 "num_base_bdevs": 2, 00:13:05.806 
"num_base_bdevs_discovered": 2, 00:13:05.806 "num_base_bdevs_operational": 2, 00:13:05.806 "process": { 00:13:05.806 "type": "rebuild", 00:13:05.806 "target": "spare", 00:13:05.806 "progress": { 00:13:05.806 "blocks": 20480, 00:13:05.806 "percent": 32 00:13:05.806 } 00:13:05.806 }, 00:13:05.806 "base_bdevs_list": [ 00:13:05.806 { 00:13:05.806 "name": "spare", 00:13:05.806 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:05.806 "is_configured": true, 00:13:05.806 "data_offset": 2048, 00:13:05.806 "data_size": 63488 00:13:05.806 }, 00:13:05.806 { 00:13:05.806 "name": "BaseBdev2", 00:13:05.806 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:05.806 "is_configured": true, 00:13:05.806 "data_offset": 2048, 00:13:05.806 "data_size": 63488 00:13:05.806 } 00:13:05.806 ] 00:13:05.806 }' 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.806 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.069 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.069 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.069 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.069 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.069 [2024-09-28 16:14:20.536366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.069 [2024-09-28 16:14:20.585926] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.069 [2024-09-28 16:14:20.585986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.069 [2024-09-28 16:14:20.586001] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.070 [2024-09-28 16:14:20.586012] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.070 16:14:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.070 "name": "raid_bdev1", 00:13:06.070 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:06.070 "strip_size_kb": 0, 00:13:06.070 "state": "online", 00:13:06.070 "raid_level": "raid1", 00:13:06.070 "superblock": true, 00:13:06.070 "num_base_bdevs": 2, 00:13:06.070 "num_base_bdevs_discovered": 1, 00:13:06.070 "num_base_bdevs_operational": 1, 00:13:06.070 "base_bdevs_list": [ 00:13:06.070 { 00:13:06.070 "name": null, 00:13:06.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.070 "is_configured": false, 00:13:06.070 "data_offset": 0, 00:13:06.070 "data_size": 63488 00:13:06.070 }, 00:13:06.070 { 00:13:06.070 "name": "BaseBdev2", 00:13:06.070 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:06.070 "is_configured": true, 00:13:06.070 "data_offset": 2048, 00:13:06.070 "data_size": 63488 00:13:06.070 } 00:13:06.070 ] 00:13:06.070 }' 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.070 16:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.677 "name": "raid_bdev1", 00:13:06.677 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:06.677 "strip_size_kb": 0, 00:13:06.677 "state": "online", 00:13:06.677 "raid_level": "raid1", 00:13:06.677 "superblock": true, 00:13:06.677 "num_base_bdevs": 2, 00:13:06.677 "num_base_bdevs_discovered": 1, 00:13:06.677 "num_base_bdevs_operational": 1, 00:13:06.677 "base_bdevs_list": [ 00:13:06.677 { 00:13:06.677 "name": null, 00:13:06.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.677 "is_configured": false, 00:13:06.677 "data_offset": 0, 00:13:06.677 "data_size": 63488 00:13:06.677 }, 00:13:06.677 { 00:13:06.677 "name": "BaseBdev2", 00:13:06.677 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:06.677 "is_configured": true, 00:13:06.677 "data_offset": 2048, 00:13:06.677 "data_size": 63488 00:13:06.677 } 00:13:06.677 ] 00:13:06.677 }' 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:06.677 [2024-09-28 16:14:21.222154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.677 [2024-09-28 16:14:21.238008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.677 16:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:06.677 [2024-09-28 16:14:21.240148] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.615 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.615 "name": "raid_bdev1", 00:13:07.615 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:07.615 "strip_size_kb": 0, 00:13:07.615 "state": "online", 00:13:07.615 "raid_level": "raid1", 
00:13:07.615 "superblock": true, 00:13:07.615 "num_base_bdevs": 2, 00:13:07.615 "num_base_bdevs_discovered": 2, 00:13:07.615 "num_base_bdevs_operational": 2, 00:13:07.615 "process": { 00:13:07.615 "type": "rebuild", 00:13:07.615 "target": "spare", 00:13:07.615 "progress": { 00:13:07.615 "blocks": 20480, 00:13:07.615 "percent": 32 00:13:07.615 } 00:13:07.615 }, 00:13:07.615 "base_bdevs_list": [ 00:13:07.615 { 00:13:07.615 "name": "spare", 00:13:07.615 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:07.615 "is_configured": true, 00:13:07.615 "data_offset": 2048, 00:13:07.615 "data_size": 63488 00:13:07.615 }, 00:13:07.615 { 00:13:07.615 "name": "BaseBdev2", 00:13:07.615 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:07.615 "is_configured": true, 00:13:07.615 "data_offset": 2048, 00:13:07.615 "data_size": 63488 00:13:07.615 } 00:13:07.615 ] 00:13:07.616 }' 00:13:07.616 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:07.875 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:07.875 16:14:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.875 "name": "raid_bdev1", 00:13:07.875 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:07.875 "strip_size_kb": 0, 00:13:07.875 "state": "online", 00:13:07.875 "raid_level": "raid1", 00:13:07.875 "superblock": true, 00:13:07.875 "num_base_bdevs": 2, 00:13:07.875 "num_base_bdevs_discovered": 2, 00:13:07.875 "num_base_bdevs_operational": 2, 00:13:07.875 "process": { 00:13:07.875 "type": "rebuild", 00:13:07.875 "target": "spare", 00:13:07.875 "progress": { 00:13:07.875 "blocks": 22528, 00:13:07.875 "percent": 35 00:13:07.875 } 00:13:07.875 }, 00:13:07.875 "base_bdevs_list": [ 
00:13:07.875 { 00:13:07.875 "name": "spare", 00:13:07.875 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:07.875 "is_configured": true, 00:13:07.875 "data_offset": 2048, 00:13:07.875 "data_size": 63488 00:13:07.875 }, 00:13:07.875 { 00:13:07.875 "name": "BaseBdev2", 00:13:07.875 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:07.875 "is_configured": true, 00:13:07.875 "data_offset": 2048, 00:13:07.875 "data_size": 63488 00:13:07.875 } 00:13:07.875 ] 00:13:07.875 }' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.875 16:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.255 "name": "raid_bdev1", 00:13:09.255 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:09.255 "strip_size_kb": 0, 00:13:09.255 "state": "online", 00:13:09.255 "raid_level": "raid1", 00:13:09.255 "superblock": true, 00:13:09.255 "num_base_bdevs": 2, 00:13:09.255 "num_base_bdevs_discovered": 2, 00:13:09.255 "num_base_bdevs_operational": 2, 00:13:09.255 "process": { 00:13:09.255 "type": "rebuild", 00:13:09.255 "target": "spare", 00:13:09.255 "progress": { 00:13:09.255 "blocks": 45056, 00:13:09.255 "percent": 70 00:13:09.255 } 00:13:09.255 }, 00:13:09.255 "base_bdevs_list": [ 00:13:09.255 { 00:13:09.255 "name": "spare", 00:13:09.255 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:09.255 "is_configured": true, 00:13:09.255 "data_offset": 2048, 00:13:09.255 "data_size": 63488 00:13:09.255 }, 00:13:09.255 { 00:13:09.255 "name": "BaseBdev2", 00:13:09.255 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:09.255 "is_configured": true, 00:13:09.255 "data_offset": 2048, 00:13:09.255 "data_size": 63488 00:13:09.255 } 00:13:09.255 ] 00:13:09.255 }' 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.255 16:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.824 [2024-09-28 
16:14:24.361467] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:09.824 [2024-09-28 16:14:24.361613] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:09.824 [2024-09-28 16:14:24.361757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.084 "name": "raid_bdev1", 00:13:10.084 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:10.084 "strip_size_kb": 0, 00:13:10.084 "state": "online", 00:13:10.084 "raid_level": "raid1", 00:13:10.084 "superblock": true, 00:13:10.084 "num_base_bdevs": 2, 00:13:10.084 "num_base_bdevs_discovered": 2, 00:13:10.084 
"num_base_bdevs_operational": 2, 00:13:10.084 "base_bdevs_list": [ 00:13:10.084 { 00:13:10.084 "name": "spare", 00:13:10.084 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:10.084 "is_configured": true, 00:13:10.084 "data_offset": 2048, 00:13:10.084 "data_size": 63488 00:13:10.084 }, 00:13:10.084 { 00:13:10.084 "name": "BaseBdev2", 00:13:10.084 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:10.084 "is_configured": true, 00:13:10.084 "data_offset": 2048, 00:13:10.084 "data_size": 63488 00:13:10.084 } 00:13:10.084 ] 00:13:10.084 }' 00:13:10.084 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.344 "name": "raid_bdev1", 00:13:10.344 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:10.344 "strip_size_kb": 0, 00:13:10.344 "state": "online", 00:13:10.344 "raid_level": "raid1", 00:13:10.344 "superblock": true, 00:13:10.344 "num_base_bdevs": 2, 00:13:10.344 "num_base_bdevs_discovered": 2, 00:13:10.344 "num_base_bdevs_operational": 2, 00:13:10.344 "base_bdevs_list": [ 00:13:10.344 { 00:13:10.344 "name": "spare", 00:13:10.344 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:10.344 "is_configured": true, 00:13:10.344 "data_offset": 2048, 00:13:10.344 "data_size": 63488 00:13:10.344 }, 00:13:10.344 { 00:13:10.344 "name": "BaseBdev2", 00:13:10.344 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:10.344 "is_configured": true, 00:13:10.344 "data_offset": 2048, 00:13:10.344 "data_size": 63488 00:13:10.344 } 00:13:10.344 ] 00:13:10.344 }' 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.344 16:14:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.344 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.344 "name": "raid_bdev1", 00:13:10.344 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:10.344 "strip_size_kb": 0, 00:13:10.345 "state": "online", 00:13:10.345 "raid_level": "raid1", 00:13:10.345 "superblock": true, 00:13:10.345 "num_base_bdevs": 2, 00:13:10.345 "num_base_bdevs_discovered": 2, 00:13:10.345 "num_base_bdevs_operational": 2, 00:13:10.345 "base_bdevs_list": [ 00:13:10.345 { 00:13:10.345 "name": "spare", 00:13:10.345 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:10.345 "is_configured": true, 00:13:10.345 "data_offset": 2048, 00:13:10.345 "data_size": 63488 00:13:10.345 }, 00:13:10.345 { 
00:13:10.345 "name": "BaseBdev2", 00:13:10.345 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:10.345 "is_configured": true, 00:13:10.345 "data_offset": 2048, 00:13:10.345 "data_size": 63488 00:13:10.345 } 00:13:10.345 ] 00:13:10.345 }' 00:13:10.345 16:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.345 16:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.913 [2024-09-28 16:14:25.352994] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.913 [2024-09-28 16:14:25.353078] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.913 [2024-09-28 16:14:25.353181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.913 [2024-09-28 16:14:25.353304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.913 [2024-09-28 16:14:25.353371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:10.913 16:14:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:10.913 /dev/nbd0 00:13:10.913 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.173 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.173 1+0 records in 00:13:11.173 1+0 records out 00:13:11.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046768 s, 8.8 MB/s 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:11.174 /dev/nbd1 00:13:11.174 16:14:25 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.174 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.433 1+0 records in 00:13:11.433 1+0 records out 00:13:11.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456232 s, 9.0 MB/s 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:11.433 16:14:25 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.433 16:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:11.433 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:11.433 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.433 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:11.433 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.433 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:11.433 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.433 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:11.691 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.692 16:14:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.692 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.950 [2024-09-28 16:14:26.541216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:11.950 [2024-09-28 16:14:26.541283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.950 [2024-09-28 16:14:26.541308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:11.950 [2024-09-28 16:14:26.541319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.950 [2024-09-28 16:14:26.543870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.950 [2024-09-28 16:14:26.543957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.950 [2024-09-28 16:14:26.544067] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:11.950 [2024-09-28 16:14:26.544121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.950 [2024-09-28 16:14:26.544274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.950 spare 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.950 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.210 [2024-09-28 16:14:26.644172] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:12.210 [2024-09-28 16:14:26.644202] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:12.210 [2024-09-28 16:14:26.644512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:12.210 [2024-09-28 16:14:26.644710] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:12.210 [2024-09-28 16:14:26.644728] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:12.210 [2024-09-28 16:14:26.644904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.210 
16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.210 "name": "raid_bdev1", 00:13:12.210 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:12.210 "strip_size_kb": 0, 00:13:12.210 "state": "online", 00:13:12.210 "raid_level": "raid1", 00:13:12.210 "superblock": true, 00:13:12.210 "num_base_bdevs": 2, 00:13:12.210 "num_base_bdevs_discovered": 2, 00:13:12.210 "num_base_bdevs_operational": 2, 00:13:12.210 "base_bdevs_list": [ 00:13:12.210 { 00:13:12.210 "name": "spare", 00:13:12.210 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:12.210 "is_configured": true, 00:13:12.210 "data_offset": 2048, 00:13:12.210 "data_size": 63488 00:13:12.210 }, 00:13:12.210 { 00:13:12.210 "name": "BaseBdev2", 00:13:12.210 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:12.210 "is_configured": true, 00:13:12.210 "data_offset": 2048, 00:13:12.210 "data_size": 63488 00:13:12.210 } 00:13:12.210 ] 00:13:12.210 }' 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.210 16:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.470 16:14:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.470 "name": "raid_bdev1", 00:13:12.470 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:12.470 "strip_size_kb": 0, 00:13:12.470 "state": "online", 00:13:12.470 "raid_level": "raid1", 00:13:12.470 "superblock": true, 00:13:12.470 "num_base_bdevs": 2, 00:13:12.470 "num_base_bdevs_discovered": 2, 00:13:12.470 "num_base_bdevs_operational": 2, 00:13:12.470 "base_bdevs_list": [ 00:13:12.470 { 00:13:12.470 "name": "spare", 00:13:12.470 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:12.470 "is_configured": true, 00:13:12.470 "data_offset": 2048, 00:13:12.470 "data_size": 63488 00:13:12.470 }, 00:13:12.470 { 00:13:12.470 "name": "BaseBdev2", 00:13:12.470 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:12.470 "is_configured": true, 00:13:12.470 "data_offset": 2048, 00:13:12.470 "data_size": 63488 00:13:12.470 } 00:13:12.470 ] 00:13:12.470 }' 00:13:12.470 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.729 [2024-09-28 16:14:27.291996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.729 "name": "raid_bdev1", 00:13:12.729 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:12.729 "strip_size_kb": 0, 00:13:12.729 "state": "online", 00:13:12.729 "raid_level": "raid1", 00:13:12.729 "superblock": true, 00:13:12.729 "num_base_bdevs": 2, 00:13:12.729 "num_base_bdevs_discovered": 1, 00:13:12.729 "num_base_bdevs_operational": 1, 00:13:12.729 "base_bdevs_list": [ 00:13:12.729 { 00:13:12.729 "name": null, 00:13:12.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.729 "is_configured": false, 00:13:12.729 "data_offset": 0, 00:13:12.729 "data_size": 63488 00:13:12.729 }, 00:13:12.729 { 00:13:12.729 "name": "BaseBdev2", 00:13:12.729 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:12.729 "is_configured": true, 00:13:12.729 "data_offset": 2048, 00:13:12.729 "data_size": 63488 00:13:12.729 } 00:13:12.729 ] 00:13:12.729 }' 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.729 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.296 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.296 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.296 16:14:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.296 [2024-09-28 16:14:27.751260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.296 [2024-09-28 16:14:27.751515] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.296 [2024-09-28 16:14:27.751578] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:13.296 [2024-09-28 16:14:27.751637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.296 [2024-09-28 16:14:27.767401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:13.296 16:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.296 16:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:13.296 [2024-09-28 16:14:27.769601] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.234 "name": "raid_bdev1", 00:13:14.234 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:14.234 "strip_size_kb": 0, 00:13:14.234 "state": "online", 00:13:14.234 "raid_level": "raid1", 00:13:14.234 "superblock": true, 00:13:14.234 "num_base_bdevs": 2, 00:13:14.234 "num_base_bdevs_discovered": 2, 00:13:14.234 "num_base_bdevs_operational": 2, 00:13:14.234 "process": { 00:13:14.234 "type": "rebuild", 00:13:14.234 "target": "spare", 00:13:14.234 "progress": { 00:13:14.234 "blocks": 20480, 00:13:14.234 "percent": 32 00:13:14.234 } 00:13:14.234 }, 00:13:14.234 "base_bdevs_list": [ 00:13:14.234 { 00:13:14.234 "name": "spare", 00:13:14.234 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:14.234 "is_configured": true, 00:13:14.234 "data_offset": 2048, 00:13:14.234 "data_size": 63488 00:13:14.234 }, 00:13:14.234 { 00:13:14.234 "name": "BaseBdev2", 00:13:14.234 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:14.234 "is_configured": true, 00:13:14.234 "data_offset": 2048, 00:13:14.234 "data_size": 63488 00:13:14.234 } 00:13:14.234 ] 00:13:14.234 }' 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.234 16:14:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.234 16:14:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.493 [2024-09-28 16:14:28.916837] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.493 [2024-09-28 16:14:28.978317] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.493 [2024-09-28 16:14:28.978386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.493 [2024-09-28 16:14:28.978400] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.493 [2024-09-28 16:14:28.978411] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.493 "name": "raid_bdev1", 00:13:14.493 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:14.493 "strip_size_kb": 0, 00:13:14.493 "state": "online", 00:13:14.493 "raid_level": "raid1", 00:13:14.493 "superblock": true, 00:13:14.493 "num_base_bdevs": 2, 00:13:14.493 "num_base_bdevs_discovered": 1, 00:13:14.493 "num_base_bdevs_operational": 1, 00:13:14.493 "base_bdevs_list": [ 00:13:14.493 { 00:13:14.493 "name": null, 00:13:14.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.493 "is_configured": false, 00:13:14.493 "data_offset": 0, 00:13:14.493 "data_size": 63488 00:13:14.493 }, 00:13:14.493 { 00:13:14.493 "name": "BaseBdev2", 00:13:14.493 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:14.493 "is_configured": true, 00:13:14.493 "data_offset": 2048, 00:13:14.493 "data_size": 63488 00:13:14.493 } 00:13:14.493 ] 00:13:14.493 }' 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.493 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.751 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.751 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:14.751 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.751 [2024-09-28 16:14:29.423075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.751 [2024-09-28 16:14:29.423199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.751 [2024-09-28 16:14:29.423246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:14.751 [2024-09-28 16:14:29.423287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.751 [2024-09-28 16:14:29.423859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.751 [2024-09-28 16:14:29.423926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.751 [2024-09-28 16:14:29.424052] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:14.751 [2024-09-28 16:14:29.424096] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:14.751 [2024-09-28 16:14:29.424141] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:14.751 [2024-09-28 16:14:29.424252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.009 [2024-09-28 16:14:29.439231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:15.009 spare 00:13:15.009 16:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.009 16:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:15.009 [2024-09-28 16:14:29.441391] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.947 "name": "raid_bdev1", 00:13:15.947 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:15.947 "strip_size_kb": 0, 00:13:15.947 "state": "online", 00:13:15.947 
"raid_level": "raid1", 00:13:15.947 "superblock": true, 00:13:15.947 "num_base_bdevs": 2, 00:13:15.947 "num_base_bdevs_discovered": 2, 00:13:15.947 "num_base_bdevs_operational": 2, 00:13:15.947 "process": { 00:13:15.947 "type": "rebuild", 00:13:15.947 "target": "spare", 00:13:15.947 "progress": { 00:13:15.947 "blocks": 20480, 00:13:15.947 "percent": 32 00:13:15.947 } 00:13:15.947 }, 00:13:15.947 "base_bdevs_list": [ 00:13:15.947 { 00:13:15.947 "name": "spare", 00:13:15.947 "uuid": "aa2fcc94-a5c0-5d23-8eb2-9b2ed3f82c5f", 00:13:15.947 "is_configured": true, 00:13:15.947 "data_offset": 2048, 00:13:15.947 "data_size": 63488 00:13:15.947 }, 00:13:15.947 { 00:13:15.947 "name": "BaseBdev2", 00:13:15.947 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:15.947 "is_configured": true, 00:13:15.947 "data_offset": 2048, 00:13:15.947 "data_size": 63488 00:13:15.947 } 00:13:15.947 ] 00:13:15.947 }' 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.947 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.947 [2024-09-28 16:14:30.581144] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.207 [2024-09-28 16:14:30.649784] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:16.207 [2024-09-28 16:14:30.649888] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.207 [2024-09-28 16:14:30.649927] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.207 [2024-09-28 16:14:30.649947] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.207 16:14:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.207 "name": "raid_bdev1", 00:13:16.207 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:16.207 "strip_size_kb": 0, 00:13:16.207 "state": "online", 00:13:16.207 "raid_level": "raid1", 00:13:16.207 "superblock": true, 00:13:16.207 "num_base_bdevs": 2, 00:13:16.207 "num_base_bdevs_discovered": 1, 00:13:16.207 "num_base_bdevs_operational": 1, 00:13:16.207 "base_bdevs_list": [ 00:13:16.207 { 00:13:16.207 "name": null, 00:13:16.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.207 "is_configured": false, 00:13:16.207 "data_offset": 0, 00:13:16.207 "data_size": 63488 00:13:16.207 }, 00:13:16.207 { 00:13:16.207 "name": "BaseBdev2", 00:13:16.207 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:16.207 "is_configured": true, 00:13:16.207 "data_offset": 2048, 00:13:16.207 "data_size": 63488 00:13:16.207 } 00:13:16.207 ] 00:13:16.207 }' 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.207 16:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.776 "name": "raid_bdev1", 00:13:16.776 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:16.776 "strip_size_kb": 0, 00:13:16.776 "state": "online", 00:13:16.776 "raid_level": "raid1", 00:13:16.776 "superblock": true, 00:13:16.776 "num_base_bdevs": 2, 00:13:16.776 "num_base_bdevs_discovered": 1, 00:13:16.776 "num_base_bdevs_operational": 1, 00:13:16.776 "base_bdevs_list": [ 00:13:16.776 { 00:13:16.776 "name": null, 00:13:16.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.776 "is_configured": false, 00:13:16.776 "data_offset": 0, 00:13:16.776 "data_size": 63488 00:13:16.776 }, 00:13:16.776 { 00:13:16.776 "name": "BaseBdev2", 00:13:16.776 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:16.776 "is_configured": true, 00:13:16.776 "data_offset": 2048, 00:13:16.776 "data_size": 63488 00:13:16.776 } 00:13:16.776 ] 00:13:16.776 }' 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.776 [2024-09-28 16:14:31.333570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:16.776 [2024-09-28 16:14:31.333702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.776 [2024-09-28 16:14:31.333732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:16.776 [2024-09-28 16:14:31.333741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.776 [2024-09-28 16:14:31.334300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.776 [2024-09-28 16:14:31.334320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.776 [2024-09-28 16:14:31.334419] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:16.776 [2024-09-28 16:14:31.334435] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:16.776 [2024-09-28 16:14:31.334450] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.776 [2024-09-28 16:14:31.334462] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:16.776 BaseBdev1 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:16.776 16:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.714 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.714 "name": "raid_bdev1", 00:13:17.714 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:17.714 "strip_size_kb": 0, 
00:13:17.714 "state": "online", 00:13:17.714 "raid_level": "raid1", 00:13:17.714 "superblock": true, 00:13:17.714 "num_base_bdevs": 2, 00:13:17.714 "num_base_bdevs_discovered": 1, 00:13:17.714 "num_base_bdevs_operational": 1, 00:13:17.714 "base_bdevs_list": [ 00:13:17.714 { 00:13:17.714 "name": null, 00:13:17.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.714 "is_configured": false, 00:13:17.714 "data_offset": 0, 00:13:17.714 "data_size": 63488 00:13:17.714 }, 00:13:17.714 { 00:13:17.714 "name": "BaseBdev2", 00:13:17.714 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:17.714 "is_configured": true, 00:13:17.714 "data_offset": 2048, 00:13:17.714 "data_size": 63488 00:13:17.714 } 00:13:17.714 ] 00:13:17.714 }' 00:13:17.974 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.974 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.234 "name": "raid_bdev1", 00:13:18.234 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:18.234 "strip_size_kb": 0, 00:13:18.234 "state": "online", 00:13:18.234 "raid_level": "raid1", 00:13:18.234 "superblock": true, 00:13:18.234 "num_base_bdevs": 2, 00:13:18.234 "num_base_bdevs_discovered": 1, 00:13:18.234 "num_base_bdevs_operational": 1, 00:13:18.234 "base_bdevs_list": [ 00:13:18.234 { 00:13:18.234 "name": null, 00:13:18.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.234 "is_configured": false, 00:13:18.234 "data_offset": 0, 00:13:18.234 "data_size": 63488 00:13:18.234 }, 00:13:18.234 { 00:13:18.234 "name": "BaseBdev2", 00:13:18.234 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:18.234 "is_configured": true, 00:13:18.234 "data_offset": 2048, 00:13:18.234 "data_size": 63488 00:13:18.234 } 00:13:18.234 ] 00:13:18.234 }' 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.234 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:18.494 16:14:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.494 [2024-09-28 16:14:32.927050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.494 [2024-09-28 16:14:32.927321] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:18.494 [2024-09-28 16:14:32.927343] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:18.494 request: 00:13:18.494 { 00:13:18.494 "base_bdev": "BaseBdev1", 00:13:18.494 "raid_bdev": "raid_bdev1", 00:13:18.494 "method": "bdev_raid_add_base_bdev", 00:13:18.494 "req_id": 1 00:13:18.494 } 00:13:18.494 Got JSON-RPC error response 00:13:18.494 response: 00:13:18.494 { 00:13:18.494 "code": -22, 00:13:18.494 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:18.494 } 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:18.494 16:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.433 "name": "raid_bdev1", 00:13:19.433 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 
00:13:19.433 "strip_size_kb": 0, 00:13:19.433 "state": "online", 00:13:19.433 "raid_level": "raid1", 00:13:19.433 "superblock": true, 00:13:19.433 "num_base_bdevs": 2, 00:13:19.433 "num_base_bdevs_discovered": 1, 00:13:19.433 "num_base_bdevs_operational": 1, 00:13:19.433 "base_bdevs_list": [ 00:13:19.433 { 00:13:19.433 "name": null, 00:13:19.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.433 "is_configured": false, 00:13:19.433 "data_offset": 0, 00:13:19.433 "data_size": 63488 00:13:19.433 }, 00:13:19.433 { 00:13:19.433 "name": "BaseBdev2", 00:13:19.433 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:19.433 "is_configured": true, 00:13:19.433 "data_offset": 2048, 00:13:19.433 "data_size": 63488 00:13:19.433 } 00:13:19.433 ] 00:13:19.433 }' 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.433 16:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.003 16:14:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.003 "name": "raid_bdev1", 00:13:20.003 "uuid": "f789ce7d-2615-4922-b759-ef51c43ef6cb", 00:13:20.003 "strip_size_kb": 0, 00:13:20.003 "state": "online", 00:13:20.003 "raid_level": "raid1", 00:13:20.003 "superblock": true, 00:13:20.003 "num_base_bdevs": 2, 00:13:20.003 "num_base_bdevs_discovered": 1, 00:13:20.003 "num_base_bdevs_operational": 1, 00:13:20.003 "base_bdevs_list": [ 00:13:20.003 { 00:13:20.003 "name": null, 00:13:20.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.003 "is_configured": false, 00:13:20.003 "data_offset": 0, 00:13:20.003 "data_size": 63488 00:13:20.003 }, 00:13:20.003 { 00:13:20.003 "name": "BaseBdev2", 00:13:20.003 "uuid": "96d7798e-17f3-5c42-b90a-4353b222bbce", 00:13:20.003 "is_configured": true, 00:13:20.003 "data_offset": 2048, 00:13:20.003 "data_size": 63488 00:13:20.003 } 00:13:20.003 ] 00:13:20.003 }' 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75768 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75768 ']' 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75768 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75768 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75768' 00:13:20.003 killing process with pid 75768 00:13:20.003 Received shutdown signal, test time was about 60.000000 seconds 00:13:20.003 00:13:20.003 Latency(us) 00:13:20.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.003 =================================================================================================================== 00:13:20.003 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75768 00:13:20.003 [2024-09-28 16:14:34.574875] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.003 [2024-09-28 16:14:34.575036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.003 16:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75768 00:13:20.004 [2024-09-28 16:14:34.575091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.004 [2024-09-28 16:14:34.575104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:20.263 [2024-09-28 16:14:34.881812] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:21.655 00:13:21.655 real 0m23.475s 00:13:21.655 user 0m28.265s 00:13:21.655 sys 0m4.107s 00:13:21.655 
************************************ 00:13:21.655 END TEST raid_rebuild_test_sb 00:13:21.655 ************************************ 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.655 16:14:36 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:21.655 16:14:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:21.655 16:14:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.655 16:14:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.655 ************************************ 00:13:21.655 START TEST raid_rebuild_test_io 00:13:21.655 ************************************ 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- 
# (( i <= num_base_bdevs )) 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:21.655 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76498 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76498 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76498 ']' 00:13:21.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.656 16:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.916 [2024-09-28 16:14:36.361087] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:21.916 [2024-09-28 16:14:36.361270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:21.916 Zero copy mechanism will not be used. 
00:13:21.916 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76498 ] 00:13:21.916 [2024-09-28 16:14:36.525047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.176 [2024-09-28 16:14:36.757948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.436 [2024-09-28 16:14:36.985782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.436 [2024-09-28 16:14:36.985891] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 BaseBdev1_malloc 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 [2024-09-28 16:14:37.226286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.697 [2024-09-28 16:14:37.226406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:22.697 [2024-09-28 16:14:37.226450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:22.697 [2024-09-28 16:14:37.226489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.697 [2024-09-28 16:14:37.228887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.697 [2024-09-28 16:14:37.228977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.697 BaseBdev1 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 BaseBdev2_malloc 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 [2024-09-28 16:14:37.296692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:22.697 [2024-09-28 16:14:37.296749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.697 [2024-09-28 16:14:37.296769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:22.697 [2024-09-28 16:14:37.296782] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.697 [2024-09-28 16:14:37.299080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.697 [2024-09-28 16:14:37.299127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.697 BaseBdev2 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 spare_malloc 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 spare_delay 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 [2024-09-28 16:14:37.369495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.697 [2024-09-28 16:14:37.369549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:22.697 [2024-09-28 16:14:37.369567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:22.697 [2024-09-28 16:14:37.369578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.697 [2024-09-28 16:14:37.371982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.697 [2024-09-28 16:14:37.372064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.697 spare 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.697 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.958 [2024-09-28 16:14:37.381519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.958 [2024-09-28 16:14:37.383559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.958 [2024-09-28 16:14:37.383645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:22.958 [2024-09-28 16:14:37.383657] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:22.958 [2024-09-28 16:14:37.383912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:22.958 [2024-09-28 16:14:37.384059] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:22.958 [2024-09-28 16:14:37.384067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:22.958 [2024-09-28 16:14:37.384202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.958 "name": "raid_bdev1", 00:13:22.958 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:22.958 
"strip_size_kb": 0, 00:13:22.958 "state": "online", 00:13:22.958 "raid_level": "raid1", 00:13:22.958 "superblock": false, 00:13:22.958 "num_base_bdevs": 2, 00:13:22.958 "num_base_bdevs_discovered": 2, 00:13:22.958 "num_base_bdevs_operational": 2, 00:13:22.958 "base_bdevs_list": [ 00:13:22.958 { 00:13:22.958 "name": "BaseBdev1", 00:13:22.958 "uuid": "e781574c-bb41-5b24-bbc0-c1d8ad0a66d7", 00:13:22.958 "is_configured": true, 00:13:22.958 "data_offset": 0, 00:13:22.958 "data_size": 65536 00:13:22.958 }, 00:13:22.958 { 00:13:22.958 "name": "BaseBdev2", 00:13:22.958 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:22.958 "is_configured": true, 00:13:22.958 "data_offset": 0, 00:13:22.958 "data_size": 65536 00:13:22.958 } 00:13:22.958 ] 00:13:22.958 }' 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.958 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.218 [2024-09-28 16:14:37.845041] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.218 16:14:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:23.218 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:23.478 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.479 [2024-09-28 16:14:37.908638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.479 16:14:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.479 "name": "raid_bdev1", 00:13:23.479 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:23.479 "strip_size_kb": 0, 00:13:23.479 "state": "online", 00:13:23.479 "raid_level": "raid1", 00:13:23.479 "superblock": false, 00:13:23.479 "num_base_bdevs": 2, 00:13:23.479 "num_base_bdevs_discovered": 1, 00:13:23.479 "num_base_bdevs_operational": 1, 00:13:23.479 "base_bdevs_list": [ 00:13:23.479 { 00:13:23.479 "name": null, 00:13:23.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.479 "is_configured": false, 00:13:23.479 "data_offset": 0, 00:13:23.479 "data_size": 65536 00:13:23.479 }, 00:13:23.479 { 00:13:23.479 "name": "BaseBdev2", 00:13:23.479 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:23.479 "is_configured": true, 00:13:23.479 "data_offset": 0, 00:13:23.479 "data_size": 65536 00:13:23.479 } 00:13:23.479 ] 00:13:23.479 }' 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.479 16:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:23.479 [2024-09-28 16:14:37.993745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:23.479 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.479 Zero copy mechanism will not be used. 00:13:23.479 Running I/O for 60 seconds... 00:13:23.739 16:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.739 16:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.739 16:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.739 [2024-09-28 16:14:38.340398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.739 16:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.739 16:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:23.739 [2024-09-28 16:14:38.390698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:23.739 [2024-09-28 16:14:38.392903] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.999 [2024-09-28 16:14:38.516297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:23.999 [2024-09-28 16:14:38.517072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.259 [2024-09-28 16:14:38.732078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.259 [2024-09-28 16:14:38.732651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.519 170.00 IOPS, 510.00 MiB/s [2024-09-28 16:14:39.067954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 
offset_begin: 6144 offset_end: 12288 00:13:24.519 [2024-09-28 16:14:39.068741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:24.779 [2024-09-28 16:14:39.296499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:24.779 [2024-09-28 16:14:39.296885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.779 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.779 "name": "raid_bdev1", 00:13:24.779 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:24.779 "strip_size_kb": 0, 00:13:24.779 "state": "online", 00:13:24.779 "raid_level": "raid1", 00:13:24.779 "superblock": false, 00:13:24.779 
"num_base_bdevs": 2, 00:13:24.779 "num_base_bdevs_discovered": 2, 00:13:24.779 "num_base_bdevs_operational": 2, 00:13:24.779 "process": { 00:13:24.779 "type": "rebuild", 00:13:24.779 "target": "spare", 00:13:24.779 "progress": { 00:13:24.779 "blocks": 10240, 00:13:24.779 "percent": 15 00:13:24.779 } 00:13:24.779 }, 00:13:24.779 "base_bdevs_list": [ 00:13:24.779 { 00:13:24.779 "name": "spare", 00:13:24.779 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:24.779 "is_configured": true, 00:13:24.779 "data_offset": 0, 00:13:24.780 "data_size": 65536 00:13:24.780 }, 00:13:24.780 { 00:13:24.780 "name": "BaseBdev2", 00:13:24.780 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:24.780 "is_configured": true, 00:13:24.780 "data_offset": 0, 00:13:24.780 "data_size": 65536 00:13:24.780 } 00:13:24.780 ] 00:13:24.780 }' 00:13:24.780 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.039 [2024-09-28 16:14:39.505168] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.039 [2024-09-28 16:14:39.513454] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:25.039 [2024-09-28 16:14:39.521256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.039 
[2024-09-28 16:14:39.521291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.039 [2024-09-28 16:14:39.521306] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:25.039 [2024-09-28 16:14:39.564317] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.039 "name": "raid_bdev1", 00:13:25.039 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:25.039 "strip_size_kb": 0, 00:13:25.039 "state": "online", 00:13:25.039 "raid_level": "raid1", 00:13:25.039 "superblock": false, 00:13:25.039 "num_base_bdevs": 2, 00:13:25.039 "num_base_bdevs_discovered": 1, 00:13:25.039 "num_base_bdevs_operational": 1, 00:13:25.039 "base_bdevs_list": [ 00:13:25.039 { 00:13:25.039 "name": null, 00:13:25.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.039 "is_configured": false, 00:13:25.039 "data_offset": 0, 00:13:25.039 "data_size": 65536 00:13:25.039 }, 00:13:25.039 { 00:13:25.039 "name": "BaseBdev2", 00:13:25.039 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:25.039 "is_configured": true, 00:13:25.039 "data_offset": 0, 00:13:25.039 "data_size": 65536 00:13:25.039 } 00:13:25.039 ] 00:13:25.039 }' 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.039 16:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.609 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.609 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.609 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.609 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.609 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.610 180.50 IOPS, 541.50 MiB/s 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.610 "name": "raid_bdev1", 00:13:25.610 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:25.610 "strip_size_kb": 0, 00:13:25.610 "state": "online", 00:13:25.610 "raid_level": "raid1", 00:13:25.610 "superblock": false, 00:13:25.610 "num_base_bdevs": 2, 00:13:25.610 "num_base_bdevs_discovered": 1, 00:13:25.610 "num_base_bdevs_operational": 1, 00:13:25.610 "base_bdevs_list": [ 00:13:25.610 { 00:13:25.610 "name": null, 00:13:25.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.610 "is_configured": false, 00:13:25.610 "data_offset": 0, 00:13:25.610 "data_size": 65536 00:13:25.610 }, 00:13:25.610 { 00:13:25.610 "name": "BaseBdev2", 00:13:25.610 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:25.610 "is_configured": true, 00:13:25.610 "data_offset": 0, 00:13:25.610 "data_size": 65536 00:13:25.610 } 00:13:25.610 ] 00:13:25.610 }' 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.610 
16:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.610 [2024-09-28 16:14:40.132196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.610 16:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:25.610 [2024-09-28 16:14:40.176161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:25.610 [2024-09-28 16:14:40.178361] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.610 [2024-09-28 16:14:40.291595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:25.610 [2024-09-28 16:14:40.292278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:25.869 [2024-09-28 16:14:40.507413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:25.869 [2024-09-28 16:14:40.507855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.439 [2024-09-28 16:14:40.827310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:26.439 155.33 IOPS, 466.00 MiB/s [2024-09-28 16:14:41.048681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:26.439 [2024-09-28 16:14:41.049142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.712 "name": "raid_bdev1", 00:13:26.712 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:26.712 "strip_size_kb": 0, 00:13:26.712 "state": "online", 00:13:26.712 "raid_level": "raid1", 00:13:26.712 "superblock": false, 00:13:26.712 "num_base_bdevs": 2, 00:13:26.712 "num_base_bdevs_discovered": 2, 00:13:26.712 "num_base_bdevs_operational": 2, 00:13:26.712 "process": { 00:13:26.712 "type": "rebuild", 00:13:26.712 "target": "spare", 00:13:26.712 "progress": { 00:13:26.712 "blocks": 10240, 00:13:26.712 "percent": 15 00:13:26.712 } 00:13:26.712 }, 00:13:26.712 "base_bdevs_list": [ 00:13:26.712 { 00:13:26.712 "name": "spare", 00:13:26.712 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:26.712 "is_configured": true, 00:13:26.712 "data_offset": 0, 00:13:26.712 "data_size": 65536 00:13:26.712 }, 00:13:26.712 { 00:13:26.712 "name": "BaseBdev2", 
00:13:26.712 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:26.712 "is_configured": true, 00:13:26.712 "data_offset": 0, 00:13:26.712 "data_size": 65536 00:13:26.712 } 00:13:26.712 ] 00:13:26.712 }' 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.712 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.713 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.713 
16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.713 16:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.713 16:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.713 16:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.713 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.713 "name": "raid_bdev1", 00:13:26.713 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:26.713 "strip_size_kb": 0, 00:13:26.713 "state": "online", 00:13:26.713 "raid_level": "raid1", 00:13:26.713 "superblock": false, 00:13:26.713 "num_base_bdevs": 2, 00:13:26.713 "num_base_bdevs_discovered": 2, 00:13:26.713 "num_base_bdevs_operational": 2, 00:13:26.713 "process": { 00:13:26.713 "type": "rebuild", 00:13:26.713 "target": "spare", 00:13:26.713 "progress": { 00:13:26.713 "blocks": 12288, 00:13:26.713 "percent": 18 00:13:26.713 } 00:13:26.713 }, 00:13:26.713 "base_bdevs_list": [ 00:13:26.713 { 00:13:26.713 "name": "spare", 00:13:26.713 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:26.713 "is_configured": true, 00:13:26.713 "data_offset": 0, 00:13:26.713 "data_size": 65536 00:13:26.713 }, 00:13:26.713 { 00:13:26.713 "name": "BaseBdev2", 00:13:26.713 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:26.713 "is_configured": true, 00:13:26.713 "data_offset": 0, 00:13:26.713 "data_size": 65536 00:13:26.713 } 00:13:26.713 ] 00:13:26.713 }' 00:13:26.713 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.972 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.972 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.972 16:14:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.972 16:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.232 [2024-09-28 16:14:41.838766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:27.491 126.25 IOPS, 378.75 MiB/s [2024-09-28 16:14:42.061703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:27.491 [2024-09-28 16:14:42.063681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:27.750 [2024-09-28 16:14:42.277588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 [2024-09-28 16:14:42.492384] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.011 "name": "raid_bdev1", 00:13:28.011 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:28.011 "strip_size_kb": 0, 00:13:28.011 "state": "online", 00:13:28.011 "raid_level": "raid1", 00:13:28.011 "superblock": false, 00:13:28.011 "num_base_bdevs": 2, 00:13:28.011 "num_base_bdevs_discovered": 2, 00:13:28.011 "num_base_bdevs_operational": 2, 00:13:28.011 "process": { 00:13:28.011 "type": "rebuild", 00:13:28.011 "target": "spare", 00:13:28.011 "progress": { 00:13:28.011 "blocks": 32768, 00:13:28.011 "percent": 50 00:13:28.011 } 00:13:28.011 }, 00:13:28.011 "base_bdevs_list": [ 00:13:28.011 { 00:13:28.011 "name": "spare", 00:13:28.011 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:28.011 "is_configured": true, 00:13:28.011 "data_offset": 0, 00:13:28.011 "data_size": 65536 00:13:28.011 }, 00:13:28.011 { 00:13:28.011 "name": "BaseBdev2", 00:13:28.011 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:28.011 "is_configured": true, 00:13:28.011 "data_offset": 0, 00:13:28.011 "data_size": 65536 00:13:28.011 } 00:13:28.011 ] 00:13:28.011 }' 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.011 16:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.275 [2024-09-28 16:14:42.705760] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:28.275 [2024-09-28 16:14:42.706073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:29.104 112.20 IOPS, 336.60 MiB/s 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.104 [2024-09-28 16:14:43.675402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.104 "name": "raid_bdev1", 00:13:29.104 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:29.104 "strip_size_kb": 0, 00:13:29.104 "state": "online", 00:13:29.104 "raid_level": "raid1", 00:13:29.104 
"superblock": false, 00:13:29.104 "num_base_bdevs": 2, 00:13:29.104 "num_base_bdevs_discovered": 2, 00:13:29.104 "num_base_bdevs_operational": 2, 00:13:29.104 "process": { 00:13:29.104 "type": "rebuild", 00:13:29.104 "target": "spare", 00:13:29.104 "progress": { 00:13:29.104 "blocks": 49152, 00:13:29.104 "percent": 75 00:13:29.104 } 00:13:29.104 }, 00:13:29.104 "base_bdevs_list": [ 00:13:29.104 { 00:13:29.104 "name": "spare", 00:13:29.104 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:29.104 "is_configured": true, 00:13:29.104 "data_offset": 0, 00:13:29.104 "data_size": 65536 00:13:29.104 }, 00:13:29.104 { 00:13:29.104 "name": "BaseBdev2", 00:13:29.104 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:29.104 "is_configured": true, 00:13:29.104 "data_offset": 0, 00:13:29.104 "data_size": 65536 00:13:29.104 } 00:13:29.104 ] 00:13:29.104 }' 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.104 16:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.624 99.67 IOPS, 299.00 MiB/s [2024-09-28 16:14:44.211217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:29.884 [2024-09-28 16:14:44.532493] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.144 [2024-09-28 16:14:44.637248] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:30.144 [2024-09-28 16:14:44.639824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.144 
16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.144 16:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.404 "name": "raid_bdev1", 00:13:30.404 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:30.404 "strip_size_kb": 0, 00:13:30.404 "state": "online", 00:13:30.404 "raid_level": "raid1", 00:13:30.404 "superblock": false, 00:13:30.404 "num_base_bdevs": 2, 00:13:30.404 "num_base_bdevs_discovered": 2, 00:13:30.404 "num_base_bdevs_operational": 2, 00:13:30.404 "base_bdevs_list": [ 00:13:30.404 { 00:13:30.404 "name": "spare", 00:13:30.404 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:30.404 "is_configured": true, 00:13:30.404 "data_offset": 0, 00:13:30.404 "data_size": 65536 00:13:30.404 }, 00:13:30.404 { 00:13:30.404 "name": "BaseBdev2", 00:13:30.404 "uuid": 
"009c4881-41b9-5740-abf3-3db832fcd258", 00:13:30.404 "is_configured": true, 00:13:30.404 "data_offset": 0, 00:13:30.404 "data_size": 65536 00:13:30.404 } 00:13:30.404 ] 00:13:30.404 }' 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.404 91.14 IOPS, 273.43 MiB/s 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.404 "name": "raid_bdev1", 
00:13:30.404 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:30.404 "strip_size_kb": 0, 00:13:30.404 "state": "online", 00:13:30.404 "raid_level": "raid1", 00:13:30.404 "superblock": false, 00:13:30.404 "num_base_bdevs": 2, 00:13:30.404 "num_base_bdevs_discovered": 2, 00:13:30.404 "num_base_bdevs_operational": 2, 00:13:30.404 "base_bdevs_list": [ 00:13:30.404 { 00:13:30.404 "name": "spare", 00:13:30.404 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:30.404 "is_configured": true, 00:13:30.404 "data_offset": 0, 00:13:30.404 "data_size": 65536 00:13:30.404 }, 00:13:30.404 { 00:13:30.404 "name": "BaseBdev2", 00:13:30.404 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:30.404 "is_configured": true, 00:13:30.404 "data_offset": 0, 00:13:30.404 "data_size": 65536 00:13:30.404 } 00:13:30.404 ] 00:13:30.404 }' 00:13:30.404 16:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.404 16:14:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.404 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.664 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.664 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.664 "name": "raid_bdev1", 00:13:30.664 "uuid": "c4b680d8-d59b-4910-9cea-8f6676e168a5", 00:13:30.664 "strip_size_kb": 0, 00:13:30.664 "state": "online", 00:13:30.664 "raid_level": "raid1", 00:13:30.664 "superblock": false, 00:13:30.664 "num_base_bdevs": 2, 00:13:30.664 "num_base_bdevs_discovered": 2, 00:13:30.664 "num_base_bdevs_operational": 2, 00:13:30.664 "base_bdevs_list": [ 00:13:30.664 { 00:13:30.664 "name": "spare", 00:13:30.664 "uuid": "033d4c2c-4819-5a89-b0cc-923b5f9c9d33", 00:13:30.664 "is_configured": true, 00:13:30.664 "data_offset": 0, 00:13:30.664 "data_size": 65536 00:13:30.664 }, 00:13:30.664 { 00:13:30.664 "name": "BaseBdev2", 00:13:30.664 "uuid": "009c4881-41b9-5740-abf3-3db832fcd258", 00:13:30.664 "is_configured": true, 00:13:30.664 "data_offset": 0, 00:13:30.664 "data_size": 65536 00:13:30.664 } 00:13:30.664 ] 00:13:30.664 }' 00:13:30.664 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:30.664 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.925 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.925 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.925 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.925 [2024-09-28 16:14:45.515322] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.925 [2024-09-28 16:14:45.515407] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.925 00:13:30.925 Latency(us) 00:13:30.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.925 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:30.925 raid_bdev1 : 7.63 86.29 258.86 0.00 0.00 16006.23 302.28 113557.58 00:13:30.925 =================================================================================================================== 00:13:30.925 Total : 86.29 258.86 0.00 0.00 16006.23 302.28 113557.58 00:13:31.185 [2024-09-28 16:14:45.627121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.185 [2024-09-28 16:14:45.627201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.185 [2024-09-28 16:14:45.627347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.185 [2024-09-28 16:14:45.627402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:31.185 { 00:13:31.185 "results": [ 00:13:31.185 { 00:13:31.185 "job": "raid_bdev1", 00:13:31.185 "core_mask": "0x1", 00:13:31.185 "workload": "randrw", 00:13:31.185 "percentage": 50, 00:13:31.185 "status": "finished", 00:13:31.185 "queue_depth": 2, 00:13:31.185 
"io_size": 3145728, 00:13:31.185 "runtime": 7.625706, 00:13:31.185 "iops": 86.28709263116097, 00:13:31.185 "mibps": 258.8612778934829, 00:13:31.185 "io_failed": 0, 00:13:31.185 "io_timeout": 0, 00:13:31.185 "avg_latency_us": 16006.225943377445, 00:13:31.185 "min_latency_us": 302.2812227074236, 00:13:31.185 "max_latency_us": 113557.57554585153 00:13:31.185 } 00:13:31.185 ], 00:13:31.185 "core_count": 1 00:13:31.185 } 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.185 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:31.444 /dev/nbd0 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.444 1+0 records in 00:13:31.444 1+0 records out 00:13:31.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514974 s, 8.0 MB/s 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.444 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.445 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.445 16:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 
00:13:31.704 /dev/nbd1 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.704 1+0 records in 00:13:31.704 1+0 records out 00:13:31.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535203 s, 7.7 MB/s 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 
00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.704 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 
-- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:31.963 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.964 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:31.964 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.964 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76498 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76498 ']' 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76498 00:13:32.224 16:14:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76498 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.224 killing process with pid 76498 00:13:32.224 Received shutdown signal, test time was about 8.890757 seconds 00:13:32.224 00:13:32.224 Latency(us) 00:13:32.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.224 =================================================================================================================== 00:13:32.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76498' 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76498 00:13:32.224 [2024-09-28 16:14:46.869643] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.224 16:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76498 00:13:32.484 [2024-09-28 16:14:47.101895] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.865 00:13:33.865 real 0m12.195s 00:13:33.865 user 0m15.108s 00:13:33.865 sys 0m1.610s 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.865 ************************************ 00:13:33.865 END TEST raid_rebuild_test_io 00:13:33.865 ************************************ 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:33.865 16:14:48 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:33.865 16:14:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:33.865 16:14:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.865 16:14:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.865 ************************************ 00:13:33.865 START TEST raid_rebuild_test_sb_io 00:13:33.865 ************************************ 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.865 16:14:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:33.865 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76874
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76874
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76874 ']'
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:34.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:34.125 16:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.125 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:34.125 Zero copy mechanism will not be used.
00:13:34.125 [2024-09-28 16:14:48.636184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:13:34.125 [2024-09-28 16:14:48.636316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76874 ]
00:13:34.125 [2024-09-28 16:14:48.798494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.385 [2024-09-28 16:14:49.039535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.645 [2024-09-28 16:14:49.269981] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:34.645 [2024-09-28 16:14:49.270015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:34.908 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:34.908 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0
00:13:34.908 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:34.908 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:34.908 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.908 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.908 BaseBdev1_malloc
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.909 [2024-09-28 16:14:49.510369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:34.909 [2024-09-28 16:14:49.510433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.909 [2024-09-28 16:14:49.510459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:34.909 [2024-09-28 16:14:49.510473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:34.909 [2024-09-28 16:14:49.512793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:34.909 [2024-09-28 16:14:49.512882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:34.909 BaseBdev1
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.909 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.169 BaseBdev2_malloc
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.169 [2024-09-28 16:14:49.598784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:35.169 [2024-09-28 16:14:49.598841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:35.169 [2024-09-28 16:14:49.598860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:35.169 [2024-09-28 16:14:49.598871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:35.169 [2024-09-28 16:14:49.601215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:35.169 [2024-09-28 16:14:49.601261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:35.169 BaseBdev2
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.169 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.170 spare_malloc
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.170 spare_delay
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.170 [2024-09-28 16:14:49.669959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:35.170 [2024-09-28 16:14:49.670013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:35.170 [2024-09-28 16:14:49.670031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:13:35.170 [2024-09-28 16:14:49.670042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:35.170 [2024-09-28 16:14:49.672415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:35.170 [2024-09-28 16:14:49.672450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:35.170 spare
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.170 [2024-09-28 16:14:49.682005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:35.170 [2024-09-28 16:14:49.684066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:35.170 [2024-09-28 16:14:49.684320] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:35.170 [2024-09-28 16:14:49.684340] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:35.170 [2024-09-28 16:14:49.684594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:35.170 [2024-09-28 16:14:49.684756] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:35.170 [2024-09-28 16:14:49.684765] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:35.170 [2024-09-28 16:14:49.684904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:35.170 "name": "raid_bdev1",
00:13:35.170 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b",
00:13:35.170 "strip_size_kb": 0,
00:13:35.170 "state": "online",
00:13:35.170 "raid_level": "raid1",
00:13:35.170 "superblock": true,
00:13:35.170 "num_base_bdevs": 2,
00:13:35.170 "num_base_bdevs_discovered": 2,
00:13:35.170 "num_base_bdevs_operational": 2,
00:13:35.170 "base_bdevs_list": [
00:13:35.170 {
00:13:35.170 "name": "BaseBdev1",
00:13:35.170 "uuid": "e72e952b-fa0b-57aa-9b1b-4d605ab76eae",
00:13:35.170 "is_configured": true,
00:13:35.170 "data_offset": 2048,
00:13:35.170 "data_size": 63488
00:13:35.170 },
00:13:35.170 {
00:13:35.170 "name": "BaseBdev2",
00:13:35.170 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae",
00:13:35.170 "is_configured": true,
00:13:35.170 "data_offset": 2048,
00:13:35.170 "data_size": 63488
00:13:35.170 }
00:13:35.170 ]
00:13:35.170 }'
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:35.170 16:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.739 [2024-09-28 16:14:50.173388] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.739 [2024-09-28 16:14:50.272915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:35.739 "name": "raid_bdev1",
00:13:35.739 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b",
00:13:35.739 "strip_size_kb": 0,
00:13:35.739 "state": "online",
00:13:35.739 "raid_level": "raid1",
00:13:35.739 "superblock": true,
00:13:35.739 "num_base_bdevs": 2,
00:13:35.739 "num_base_bdevs_discovered": 1,
00:13:35.739 "num_base_bdevs_operational": 1,
00:13:35.739 "base_bdevs_list": [
00:13:35.739 {
00:13:35.739 "name": null,
00:13:35.739 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:35.739 "is_configured": false,
00:13:35.739 "data_offset": 0,
00:13:35.739 "data_size": 63488
00:13:35.739 },
00:13:35.739 {
00:13:35.739 "name": "BaseBdev2",
00:13:35.739 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae",
00:13:35.739 "is_configured": true,
00:13:35.739 "data_offset": 2048,
00:13:35.739 "data_size": 63488
00:13:35.739 }
00:13:35.739 ]
00:13:35.739 }'
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:35.739 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.739 [2024-09-28 16:14:50.354326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:35.739 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:35.739 Zero copy mechanism will not be used.
00:13:35.739 Running I/O for 60 seconds...
00:13:36.307 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:36.308 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.308 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:36.308 [2024-09-28 16:14:50.723705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:36.308 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.308 16:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:36.308 [2024-09-28 16:14:50.784910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:13:36.308 [2024-09-28 16:14:50.787152] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:36.308 [2024-09-28 16:14:50.905322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:36.308 [2024-09-28 16:14:50.906089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:36.568 [2024-09-28 16:14:51.123234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:36.568 [2024-09-28 16:14:51.123506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:36.827 167.00 IOPS, 501.00 MiB/s [2024-09-28 16:14:51.445366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:37.086 [2024-09-28 16:14:51.654624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:37.346 "name": "raid_bdev1",
00:13:37.346 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b",
00:13:37.346 "strip_size_kb": 0,
00:13:37.346 "state": "online",
00:13:37.346 "raid_level": "raid1",
00:13:37.346 "superblock": true,
00:13:37.346 "num_base_bdevs": 2,
00:13:37.346 "num_base_bdevs_discovered": 2,
00:13:37.346 "num_base_bdevs_operational": 2,
00:13:37.346 "process": {
00:13:37.346 "type": "rebuild",
00:13:37.346 "target": "spare",
00:13:37.346 "progress": {
00:13:37.346 "blocks": 10240,
00:13:37.346 "percent": 16
00:13:37.346 }
00:13:37.346 },
00:13:37.346 "base_bdevs_list": [
00:13:37.346 {
00:13:37.346 "name": "spare",
00:13:37.346 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df",
00:13:37.346 "is_configured": true,
00:13:37.346 "data_offset": 2048,
00:13:37.346 "data_size": 63488
00:13:37.346 },
00:13:37.346 {
00:13:37.346 "name": "BaseBdev2",
00:13:37.346 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae",
00:13:37.346 "is_configured": true,
00:13:37.346 "data_offset": 2048,
00:13:37.346 "data_size": 63488
00:13:37.346 }
00:13:37.346 ]
00:13:37.346 }'
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.346 16:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:37.346 [2024-09-28 16:14:51.918724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:37.346 [2024-09-28 16:14:52.002381] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:37.346 [2024-09-28 16:14:52.009733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:37.346 [2024-09-28 16:14:52.009821] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:37.346 [2024-09-28 16:14:52.009840] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:37.604 [2024-09-28 16:14:52.049400] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.604 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:37.605 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.605 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:37.605 "name": "raid_bdev1",
00:13:37.605 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b",
00:13:37.605 "strip_size_kb": 0,
00:13:37.605 "state": "online",
00:13:37.605 "raid_level": "raid1",
00:13:37.605 "superblock": true,
00:13:37.605 "num_base_bdevs": 2,
00:13:37.605 "num_base_bdevs_discovered": 1,
00:13:37.605 "num_base_bdevs_operational": 1,
00:13:37.605 "base_bdevs_list": [
00:13:37.605 {
00:13:37.605 "name": null,
00:13:37.605 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:37.605 "is_configured": false,
00:13:37.605 "data_offset": 0,
00:13:37.605 "data_size": 63488
00:13:37.605 },
00:13:37.605 {
00:13:37.605 "name": "BaseBdev2",
00:13:37.605 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae",
00:13:37.605 "is_configured": true,
00:13:37.605 "data_offset": 2048,
00:13:37.605 "data_size": 63488
00:13:37.605 }
00:13:37.605 ]
00:13:37.605 }'
00:13:37.605 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:37.605 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:37.863 151.00 IOPS, 453.00 MiB/s 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.863 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:38.122 "name": "raid_bdev1",
00:13:38.122 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b",
00:13:38.122 "strip_size_kb": 0,
00:13:38.122 "state": "online",
00:13:38.122 "raid_level": "raid1",
00:13:38.122 "superblock": true,
00:13:38.122 "num_base_bdevs": 2,
00:13:38.122 "num_base_bdevs_discovered": 1,
00:13:38.122 "num_base_bdevs_operational": 1,
00:13:38.122 "base_bdevs_list": [
00:13:38.122 {
00:13:38.122 "name": null,
00:13:38.122 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.122 "is_configured": false,
00:13:38.122 "data_offset": 0,
00:13:38.122 "data_size": 63488
00:13:38.122 },
00:13:38.122 {
00:13:38.122 "name": "BaseBdev2",
00:13:38.122 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae",
00:13:38.122 "is_configured": true,
00:13:38.122 "data_offset": 2048,
00:13:38.122 "data_size": 63488
00:13:38.122 }
00:13:38.122 ]
00:13:38.122 }'
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.122 [2024-09-28 16:14:52.665271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.122 16:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:38.122 [2024-09-28 16:14:52.720032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:13:38.122 [2024-09-28 16:14:52.722188] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:38.381 [2024-09-28 16:14:52.829516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:38.381 [2024-09-28 16:14:52.829983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:38.381 [2024-09-28 16:14:52.956344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:38.381 [2024-09-28 16:14:52.956656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:39.208 161.00 IOPS, 483.00 MiB/s [2024-09-28 16:14:53.674684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:39.208 [2024-09-28 16:14:53.675399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:39.208 "name": "raid_bdev1",
00:13:39.208 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b",
00:13:39.208 "strip_size_kb": 0,
00:13:39.208 "state": "online",
00:13:39.208 "raid_level": "raid1",
00:13:39.208 "superblock": true,
00:13:39.208 "num_base_bdevs": 2,
00:13:39.208 "num_base_bdevs_discovered": 2,
00:13:39.208 "num_base_bdevs_operational": 2,
00:13:39.208 "process": {
00:13:39.208 "type": "rebuild",
00:13:39.208 "target": "spare",
00:13:39.208 "progress": {
00:13:39.208 "blocks": 14336,
00:13:39.208 "percent": 22
00:13:39.208 }
00:13:39.208 },
00:13:39.208 "base_bdevs_list": [
00:13:39.208 {
00:13:39.208 "name": "spare",
00:13:39.208 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df",
00:13:39.208 "is_configured": true,
00:13:39.208 "data_offset": 2048,
00:13:39.208 "data_size": 63488
00:13:39.208 },
00:13:39.208 {
00:13:39.208 "name": "BaseBdev2",
00:13:39.208 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae",
00:13:39.208 "is_configured": true,
00:13:39.208 "data_offset": 2048,
00:13:39.208 "data_size": 63488
00:13:39.208 }
00:13:39.208 ]
00:13:39.208 }'
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:39.208 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:13:39.209 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=426
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:39.209 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:39.468 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:39.468 "name": "raid_bdev1",
00:13:39.468 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b",
00:13:39.468 "strip_size_kb": 0,
00:13:39.468 "state": "online",
00:13:39.468 "raid_level": "raid1",
00:13:39.468 "superblock": true,
00:13:39.468 "num_base_bdevs": 2,
00:13:39.468 "num_base_bdevs_discovered": 2,
00:13:39.468 "num_base_bdevs_operational": 2,
00:13:39.468 "process": {
00:13:39.468 "type": "rebuild",
00:13:39.468 "target": "spare",
00:13:39.468 "progress": {
00:13:39.468 "blocks": 14336,
00:13:39.468 "percent": 22
00:13:39.468 }
00:13:39.468 },
00:13:39.468 "base_bdevs_list": [
00:13:39.468 {
00:13:39.468 "name": "spare",
00:13:39.468 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df",
00:13:39.468 "is_configured": true,
00:13:39.468 "data_offset": 2048,
00:13:39.468 "data_size": 63488
00:13:39.468 },
00:13:39.468 {
00:13:39.468 "name": "BaseBdev2",
00:13:39.468 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae",
00:13:39.468 "is_configured": true,
00:13:39.468 "data_offset": 2048,
00:13:39.468 "data_size": 63488
00:13:39.468 }
00:13:39.468 ]
00:13:39.468 }'
00:13:39.468 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:39.468 [2024-09-28 16:14:53.896389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:39.468 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:39.468 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:39.468 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:39.468 16:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:39.727 [2024-09-28 16:14:54.246328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:13:39.727 [2024-09-28 16:14:54.246760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:13:39.727 139.00 IOPS, 417.00 MiB/s [2024-09-28 16:14:54.359502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:13:39.727 [2024-09-28 16:14:54.359990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:13:40.665 16:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:40.665 16:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:40.665 16:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:40.665 16:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:40.665 16:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:40.665 16:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.665 16:14:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.665 "name": "raid_bdev1", 00:13:40.665 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:40.665 "strip_size_kb": 0, 00:13:40.665 "state": "online", 00:13:40.665 "raid_level": "raid1", 00:13:40.665 "superblock": true, 00:13:40.665 "num_base_bdevs": 2, 00:13:40.665 "num_base_bdevs_discovered": 2, 00:13:40.665 "num_base_bdevs_operational": 2, 00:13:40.665 "process": { 00:13:40.665 "type": "rebuild", 00:13:40.665 "target": "spare", 00:13:40.665 "progress": { 00:13:40.665 "blocks": 32768, 00:13:40.665 "percent": 51 00:13:40.665 } 00:13:40.665 }, 00:13:40.665 "base_bdevs_list": [ 00:13:40.665 { 00:13:40.665 "name": "spare", 00:13:40.665 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:40.665 "is_configured": true, 00:13:40.665 "data_offset": 2048, 00:13:40.665 "data_size": 63488 00:13:40.665 }, 00:13:40.665 { 00:13:40.665 "name": "BaseBdev2", 00:13:40.665 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:40.665 "is_configured": true, 00:13:40.665 "data_offset": 2048, 00:13:40.665 "data_size": 63488 00:13:40.665 } 00:13:40.665 ] 00:13:40.665 }' 00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.665 16:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.184 121.80 IOPS, 365.40 MiB/s [2024-09-28 16:14:55.806740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.753 [2024-09-28 16:14:56.145220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.753 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.753 "name": "raid_bdev1", 00:13:41.753 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:41.753 "strip_size_kb": 0, 00:13:41.753 "state": "online", 00:13:41.754 "raid_level": "raid1", 00:13:41.754 "superblock": true, 00:13:41.754 "num_base_bdevs": 2, 00:13:41.754 "num_base_bdevs_discovered": 2, 00:13:41.754 "num_base_bdevs_operational": 2, 00:13:41.754 "process": { 00:13:41.754 "type": "rebuild", 00:13:41.754 "target": "spare", 00:13:41.754 "progress": { 00:13:41.754 "blocks": 49152, 00:13:41.754 "percent": 77 00:13:41.754 } 00:13:41.754 }, 
00:13:41.754 "base_bdevs_list": [ 00:13:41.754 { 00:13:41.754 "name": "spare", 00:13:41.754 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:41.754 "is_configured": true, 00:13:41.754 "data_offset": 2048, 00:13:41.754 "data_size": 63488 00:13:41.754 }, 00:13:41.754 { 00:13:41.754 "name": "BaseBdev2", 00:13:41.754 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:41.754 "is_configured": true, 00:13:41.754 "data_offset": 2048, 00:13:41.754 "data_size": 63488 00:13:41.754 } 00:13:41.754 ] 00:13:41.754 }' 00:13:41.754 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.754 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.754 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.754 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.754 16:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.013 108.00 IOPS, 324.00 MiB/s [2024-09-28 16:14:56.683681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:42.583 [2024-09-28 16:14:57.014119] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.583 [2024-09-28 16:14:57.118886] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.583 [2024-09-28 16:14:57.121696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.844 "name": "raid_bdev1", 00:13:42.844 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:42.844 "strip_size_kb": 0, 00:13:42.844 "state": "online", 00:13:42.844 "raid_level": "raid1", 00:13:42.844 "superblock": true, 00:13:42.844 "num_base_bdevs": 2, 00:13:42.844 "num_base_bdevs_discovered": 2, 00:13:42.844 "num_base_bdevs_operational": 2, 00:13:42.844 "base_bdevs_list": [ 00:13:42.844 { 00:13:42.844 "name": "spare", 00:13:42.844 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:42.844 "is_configured": true, 00:13:42.844 "data_offset": 2048, 00:13:42.844 "data_size": 63488 00:13:42.844 }, 00:13:42.844 { 00:13:42.844 "name": "BaseBdev2", 00:13:42.844 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:42.844 "is_configured": true, 00:13:42.844 "data_offset": 2048, 00:13:42.844 "data_size": 63488 00:13:42.844 } 00:13:42.844 ] 00:13:42.844 }' 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:42.844 96.43 IOPS, 289.29 MiB/s 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.844 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.844 "name": "raid_bdev1", 00:13:42.844 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:42.845 "strip_size_kb": 0, 00:13:42.845 "state": "online", 00:13:42.845 "raid_level": "raid1", 00:13:42.845 "superblock": true, 00:13:42.845 "num_base_bdevs": 2, 00:13:42.845 
"num_base_bdevs_discovered": 2, 00:13:42.845 "num_base_bdevs_operational": 2, 00:13:42.845 "base_bdevs_list": [ 00:13:42.845 { 00:13:42.845 "name": "spare", 00:13:42.845 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:42.845 "is_configured": true, 00:13:42.845 "data_offset": 2048, 00:13:42.845 "data_size": 63488 00:13:42.845 }, 00:13:42.845 { 00:13:42.845 "name": "BaseBdev2", 00:13:42.845 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:42.845 "is_configured": true, 00:13:42.845 "data_offset": 2048, 00:13:42.845 "data_size": 63488 00:13:42.845 } 00:13:42.845 ] 00:13:42.845 }' 00:13:42.845 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.845 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.845 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.106 16:14:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.106 "name": "raid_bdev1", 00:13:43.106 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:43.106 "strip_size_kb": 0, 00:13:43.106 "state": "online", 00:13:43.106 "raid_level": "raid1", 00:13:43.106 "superblock": true, 00:13:43.106 "num_base_bdevs": 2, 00:13:43.106 "num_base_bdevs_discovered": 2, 00:13:43.106 "num_base_bdevs_operational": 2, 00:13:43.106 "base_bdevs_list": [ 00:13:43.106 { 00:13:43.106 "name": "spare", 00:13:43.106 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:43.106 "is_configured": true, 00:13:43.106 "data_offset": 2048, 00:13:43.106 "data_size": 63488 00:13:43.106 }, 00:13:43.106 { 00:13:43.106 "name": "BaseBdev2", 00:13:43.106 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:43.106 "is_configured": true, 00:13:43.106 "data_offset": 2048, 00:13:43.106 "data_size": 63488 00:13:43.106 } 00:13:43.106 ] 00:13:43.106 }' 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.106 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.412 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.412 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.412 16:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.412 [2024-09-28 16:14:57.969481] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.412 [2024-09-28 16:14:57.969569] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.412 00:13:43.412 Latency(us) 00:13:43.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.412 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:43.412 raid_bdev1 : 7.67 91.69 275.06 0.00 0.00 14707.18 291.55 114015.47 00:13:43.412 =================================================================================================================== 00:13:43.412 Total : 91.69 275.06 0.00 0.00 14707.18 291.55 114015.47 00:13:43.412 [2024-09-28 16:14:58.029428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.412 { 00:13:43.412 "results": [ 00:13:43.412 { 00:13:43.412 "job": "raid_bdev1", 00:13:43.412 "core_mask": "0x1", 00:13:43.412 "workload": "randrw", 00:13:43.412 "percentage": 50, 00:13:43.412 "status": "finished", 00:13:43.412 "queue_depth": 2, 00:13:43.412 "io_size": 3145728, 00:13:43.412 "runtime": 7.66736, 00:13:43.412 "iops": 91.68736044740302, 00:13:43.412 "mibps": 275.06208134220907, 00:13:43.412 "io_failed": 0, 00:13:43.412 "io_timeout": 0, 00:13:43.412 "avg_latency_us": 14707.180997223379, 00:13:43.412 "min_latency_us": 291.54934497816595, 00:13:43.412 "max_latency_us": 114015.46899563319 00:13:43.412 } 00:13:43.412 ], 00:13:43.412 "core_count": 1 00:13:43.412 } 00:13:43.412 [2024-09-28 16:14:58.029523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.412 [2024-09-28 16:14:58.029611] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.412 [2024-09-28 16:14:58.029621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.413 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.413 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.413 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.413 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.413 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.413 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.688 16:14:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.688 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:43.689 /dev/nbd0 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.689 1+0 records in 00:13:43.689 1+0 records out 00:13:43.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029527 s, 13.9 MB/s 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.689 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 
00:13:43.967 /dev/nbd1 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.967 1+0 records in 00:13:43.967 1+0 records out 00:13:43.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339325 s, 12.1 MB/s 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # return 0 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.967 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:44.227 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.227 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.227 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.227 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.227 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.227 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.227 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.487 16:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.487 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 [2024-09-28 16:14:59.181443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.747 [2024-09-28 16:14:59.181542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.747 [2024-09-28 16:14:59.181572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:44.747 [2024-09-28 16:14:59.181581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.747 [2024-09-28 16:14:59.183996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.747 [2024-09-28 16:14:59.184044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.747 [2024-09-28 16:14:59.184126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:44.747 [2024-09-28 16:14:59.184172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.747 [2024-09-28 16:14:59.184313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.747 spare 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:44.747 16:14:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 [2024-09-28 16:14:59.284201] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:44.747 [2024-09-28 16:14:59.284239] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.747 [2024-09-28 16:14:59.284517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:44.747 [2024-09-28 16:14:59.284682] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:44.747 [2024-09-28 16:14:59.284698] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:44.747 [2024-09-28 16:14:59.284864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.747 "name": "raid_bdev1", 00:13:44.747 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:44.747 "strip_size_kb": 0, 00:13:44.747 "state": "online", 00:13:44.747 "raid_level": "raid1", 00:13:44.747 "superblock": true, 00:13:44.747 "num_base_bdevs": 2, 00:13:44.747 "num_base_bdevs_discovered": 2, 00:13:44.747 "num_base_bdevs_operational": 2, 00:13:44.747 "base_bdevs_list": [ 00:13:44.747 { 00:13:44.747 "name": "spare", 00:13:44.747 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:44.747 "is_configured": true, 00:13:44.747 "data_offset": 2048, 00:13:44.747 "data_size": 63488 00:13:44.747 }, 00:13:44.747 { 00:13:44.747 "name": "BaseBdev2", 00:13:44.747 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:44.747 "is_configured": true, 00:13:44.747 "data_offset": 2048, 00:13:44.747 "data_size": 63488 00:13:44.747 } 00:13:44.747 ] 00:13:44.747 }' 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.747 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.317 16:14:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.317 "name": "raid_bdev1", 00:13:45.317 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:45.317 "strip_size_kb": 0, 00:13:45.317 "state": "online", 00:13:45.317 "raid_level": "raid1", 00:13:45.317 "superblock": true, 00:13:45.317 "num_base_bdevs": 2, 00:13:45.317 "num_base_bdevs_discovered": 2, 00:13:45.317 "num_base_bdevs_operational": 2, 00:13:45.317 "base_bdevs_list": [ 00:13:45.317 { 00:13:45.317 "name": "spare", 00:13:45.317 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:45.317 "is_configured": true, 00:13:45.317 "data_offset": 2048, 00:13:45.317 "data_size": 63488 00:13:45.317 }, 00:13:45.317 { 00:13:45.317 "name": "BaseBdev2", 00:13:45.317 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:45.317 "is_configured": true, 00:13:45.317 
"data_offset": 2048, 00:13:45.317 "data_size": 63488 00:13:45.317 } 00:13:45.317 ] 00:13:45.317 }' 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.317 [2024-09-28 16:14:59.924275] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.317 "name": "raid_bdev1", 00:13:45.317 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:45.317 "strip_size_kb": 0, 00:13:45.317 "state": "online", 00:13:45.317 "raid_level": "raid1", 00:13:45.317 "superblock": true, 00:13:45.317 "num_base_bdevs": 2, 00:13:45.317 "num_base_bdevs_discovered": 1, 00:13:45.317 "num_base_bdevs_operational": 1, 00:13:45.317 "base_bdevs_list": [ 00:13:45.317 { 00:13:45.317 "name": 
null, 00:13:45.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.317 "is_configured": false, 00:13:45.317 "data_offset": 0, 00:13:45.317 "data_size": 63488 00:13:45.317 }, 00:13:45.317 { 00:13:45.317 "name": "BaseBdev2", 00:13:45.317 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:45.317 "is_configured": true, 00:13:45.317 "data_offset": 2048, 00:13:45.317 "data_size": 63488 00:13:45.317 } 00:13:45.317 ] 00:13:45.317 }' 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.317 16:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.887 16:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:45.887 16:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.887 16:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.887 [2024-09-28 16:15:00.327629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.887 [2024-09-28 16:15:00.327840] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:45.887 [2024-09-28 16:15:00.327906] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:45.887 [2024-09-28 16:15:00.327973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.887 [2024-09-28 16:15:00.343881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:45.887 16:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.887 16:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:45.887 [2024-09-28 16:15:00.345954] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.827 "name": "raid_bdev1", 00:13:46.827 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:46.827 "strip_size_kb": 0, 00:13:46.827 "state": "online", 
00:13:46.827 "raid_level": "raid1", 00:13:46.827 "superblock": true, 00:13:46.827 "num_base_bdevs": 2, 00:13:46.827 "num_base_bdevs_discovered": 2, 00:13:46.827 "num_base_bdevs_operational": 2, 00:13:46.827 "process": { 00:13:46.827 "type": "rebuild", 00:13:46.827 "target": "spare", 00:13:46.827 "progress": { 00:13:46.827 "blocks": 20480, 00:13:46.827 "percent": 32 00:13:46.827 } 00:13:46.827 }, 00:13:46.827 "base_bdevs_list": [ 00:13:46.827 { 00:13:46.827 "name": "spare", 00:13:46.827 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:46.827 "is_configured": true, 00:13:46.827 "data_offset": 2048, 00:13:46.827 "data_size": 63488 00:13:46.827 }, 00:13:46.827 { 00:13:46.827 "name": "BaseBdev2", 00:13:46.827 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:46.827 "is_configured": true, 00:13:46.827 "data_offset": 2048, 00:13:46.827 "data_size": 63488 00:13:46.827 } 00:13:46.827 ] 00:13:46.827 }' 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.827 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.827 [2024-09-28 16:15:01.510318] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.087 [2024-09-28 16:15:01.554141] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.087 [2024-09-28 
16:15:01.554199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.087 [2024-09-28 16:15:01.554213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.087 [2024-09-28 16:15:01.554233] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.087 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.087 "name": "raid_bdev1", 00:13:47.087 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:47.087 "strip_size_kb": 0, 00:13:47.087 "state": "online", 00:13:47.087 "raid_level": "raid1", 00:13:47.087 "superblock": true, 00:13:47.087 "num_base_bdevs": 2, 00:13:47.087 "num_base_bdevs_discovered": 1, 00:13:47.087 "num_base_bdevs_operational": 1, 00:13:47.087 "base_bdevs_list": [ 00:13:47.087 { 00:13:47.087 "name": null, 00:13:47.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.088 "is_configured": false, 00:13:47.088 "data_offset": 0, 00:13:47.088 "data_size": 63488 00:13:47.088 }, 00:13:47.088 { 00:13:47.088 "name": "BaseBdev2", 00:13:47.088 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:47.088 "is_configured": true, 00:13:47.088 "data_offset": 2048, 00:13:47.088 "data_size": 63488 00:13:47.088 } 00:13:47.088 ] 00:13:47.088 }' 00:13:47.088 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.088 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.347 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.347 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.347 16:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.347 [2024-09-28 16:15:01.994116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.347 [2024-09-28 16:15:01.994251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.347 [2024-09-28 16:15:01.994293] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:47.347 [2024-09-28 16:15:01.994328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.347 [2024-09-28 16:15:01.994880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.347 [2024-09-28 16:15:01.994944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.347 [2024-09-28 16:15:01.995076] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:47.347 [2024-09-28 16:15:01.995122] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:47.348 [2024-09-28 16:15:01.995165] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:47.348 [2024-09-28 16:15:01.995240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.348 [2024-09-28 16:15:02.010219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:47.348 spare 00:13:47.348 16:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.348 16:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:47.348 [2024-09-28 16:15:02.012306] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.728 "name": "raid_bdev1", 00:13:48.728 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:48.728 "strip_size_kb": 0, 00:13:48.728 "state": "online", 00:13:48.728 "raid_level": "raid1", 00:13:48.728 "superblock": true, 00:13:48.728 "num_base_bdevs": 2, 00:13:48.728 "num_base_bdevs_discovered": 2, 00:13:48.728 "num_base_bdevs_operational": 2, 00:13:48.728 "process": { 00:13:48.728 "type": "rebuild", 00:13:48.728 "target": "spare", 00:13:48.728 "progress": { 00:13:48.728 "blocks": 20480, 00:13:48.728 "percent": 32 00:13:48.728 } 00:13:48.728 }, 00:13:48.728 "base_bdevs_list": [ 00:13:48.728 { 00:13:48.728 "name": "spare", 00:13:48.728 "uuid": "310ba4c5-e2db-57ba-80d6-ef99089305df", 00:13:48.728 "is_configured": true, 00:13:48.728 "data_offset": 2048, 00:13:48.728 "data_size": 63488 00:13:48.728 }, 00:13:48.728 { 00:13:48.728 "name": "BaseBdev2", 00:13:48.728 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:48.728 "is_configured": true, 00:13:48.728 "data_offset": 2048, 00:13:48.728 "data_size": 63488 00:13:48.728 } 00:13:48.728 ] 00:13:48.728 }' 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.728 [2024-09-28 16:15:03.179829] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.728 [2024-09-28 16:15:03.220495] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.728 [2024-09-28 16:15:03.220626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.728 [2024-09-28 16:15:03.220672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.728 [2024-09-28 16:15:03.220695] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.728 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.729 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.729 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.729 "name": "raid_bdev1", 00:13:48.729 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:48.729 "strip_size_kb": 0, 00:13:48.729 "state": "online", 00:13:48.729 "raid_level": "raid1", 00:13:48.729 "superblock": true, 00:13:48.729 "num_base_bdevs": 2, 00:13:48.729 "num_base_bdevs_discovered": 1, 00:13:48.729 "num_base_bdevs_operational": 1, 00:13:48.729 "base_bdevs_list": [ 00:13:48.729 { 00:13:48.729 "name": null, 00:13:48.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.729 "is_configured": false, 00:13:48.729 "data_offset": 0, 00:13:48.729 "data_size": 63488 00:13:48.729 }, 00:13:48.729 { 00:13:48.729 "name": "BaseBdev2", 00:13:48.729 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:48.729 "is_configured": true, 00:13:48.729 "data_offset": 2048, 00:13:48.729 "data_size": 63488 00:13:48.729 } 00:13:48.729 ] 00:13:48.729 }' 
00:13:48.729 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.729 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.298 "name": "raid_bdev1", 00:13:49.298 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:49.298 "strip_size_kb": 0, 00:13:49.298 "state": "online", 00:13:49.298 "raid_level": "raid1", 00:13:49.298 "superblock": true, 00:13:49.298 "num_base_bdevs": 2, 00:13:49.298 "num_base_bdevs_discovered": 1, 00:13:49.298 "num_base_bdevs_operational": 1, 00:13:49.298 "base_bdevs_list": [ 00:13:49.298 { 00:13:49.298 "name": null, 00:13:49.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.298 "is_configured": false, 00:13:49.298 "data_offset": 0, 
00:13:49.298 "data_size": 63488 00:13:49.298 }, 00:13:49.298 { 00:13:49.298 "name": "BaseBdev2", 00:13:49.298 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:49.298 "is_configured": true, 00:13:49.298 "data_offset": 2048, 00:13:49.298 "data_size": 63488 00:13:49.298 } 00:13:49.298 ] 00:13:49.298 }' 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.298 [2024-09-28 16:15:03.888527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:49.298 [2024-09-28 16:15:03.888576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.298 [2024-09-28 16:15:03.888600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:49.298 [2024-09-28 16:15:03.888609] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.298 [2024-09-28 16:15:03.889115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.298 [2024-09-28 16:15:03.889132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.298 [2024-09-28 16:15:03.889210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:49.298 [2024-09-28 16:15:03.889235] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.298 [2024-09-28 16:15:03.889246] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:49.298 [2024-09-28 16:15:03.889257] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:49.298 BaseBdev1 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.298 16:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:50.238 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.238 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.238 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.238 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.239 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.498 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.498 "name": "raid_bdev1", 00:13:50.498 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:50.498 "strip_size_kb": 0, 00:13:50.498 "state": "online", 00:13:50.498 "raid_level": "raid1", 00:13:50.498 "superblock": true, 00:13:50.498 "num_base_bdevs": 2, 00:13:50.498 "num_base_bdevs_discovered": 1, 00:13:50.498 "num_base_bdevs_operational": 1, 00:13:50.498 "base_bdevs_list": [ 00:13:50.498 { 00:13:50.498 "name": null, 00:13:50.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.498 "is_configured": false, 00:13:50.498 "data_offset": 0, 00:13:50.498 "data_size": 63488 00:13:50.498 }, 00:13:50.498 { 00:13:50.498 "name": "BaseBdev2", 00:13:50.498 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:50.498 "is_configured": true, 00:13:50.498 "data_offset": 2048, 00:13:50.498 "data_size": 63488 00:13:50.498 } 00:13:50.498 ] 00:13:50.498 }' 00:13:50.498 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.498 16:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:50.758 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.758 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.758 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.759 "name": "raid_bdev1", 00:13:50.759 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:50.759 "strip_size_kb": 0, 00:13:50.759 "state": "online", 00:13:50.759 "raid_level": "raid1", 00:13:50.759 "superblock": true, 00:13:50.759 "num_base_bdevs": 2, 00:13:50.759 "num_base_bdevs_discovered": 1, 00:13:50.759 "num_base_bdevs_operational": 1, 00:13:50.759 "base_bdevs_list": [ 00:13:50.759 { 00:13:50.759 "name": null, 00:13:50.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.759 "is_configured": false, 00:13:50.759 "data_offset": 0, 00:13:50.759 "data_size": 63488 00:13:50.759 }, 00:13:50.759 { 00:13:50.759 "name": "BaseBdev2", 00:13:50.759 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:50.759 "is_configured": true, 
00:13:50.759 "data_offset": 2048, 00:13:50.759 "data_size": 63488 00:13:50.759 } 00:13:50.759 ] 00:13:50.759 }' 00:13:50.759 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.018 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.019 [2024-09-28 16:15:05.510085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.019 [2024-09-28 16:15:05.510265] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:51.019 [2024-09-28 16:15:05.510283] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:51.019 request: 00:13:51.019 { 00:13:51.019 "base_bdev": "BaseBdev1", 00:13:51.019 "raid_bdev": "raid_bdev1", 00:13:51.019 "method": "bdev_raid_add_base_bdev", 00:13:51.019 "req_id": 1 00:13:51.019 } 00:13:51.019 Got JSON-RPC error response 00:13:51.019 response: 00:13:51.019 { 00:13:51.019 "code": -22, 00:13:51.019 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:51.019 } 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.019 16:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:51.957 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:51.957 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.957 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.958 "name": "raid_bdev1", 00:13:51.958 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:51.958 "strip_size_kb": 0, 00:13:51.958 "state": "online", 00:13:51.958 "raid_level": "raid1", 00:13:51.958 "superblock": true, 00:13:51.958 "num_base_bdevs": 2, 00:13:51.958 "num_base_bdevs_discovered": 1, 00:13:51.958 "num_base_bdevs_operational": 1, 00:13:51.958 "base_bdevs_list": [ 00:13:51.958 { 00:13:51.958 "name": null, 00:13:51.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.958 "is_configured": false, 00:13:51.958 "data_offset": 0, 00:13:51.958 "data_size": 63488 00:13:51.958 }, 00:13:51.958 { 00:13:51.958 "name": "BaseBdev2", 00:13:51.958 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:51.958 "is_configured": true, 00:13:51.958 "data_offset": 2048, 00:13:51.958 "data_size": 63488 00:13:51.958 } 00:13:51.958 ] 00:13:51.958 }' 
00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.958 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.527 16:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.527 "name": "raid_bdev1", 00:13:52.527 "uuid": "b7e24ed5-1547-4b4c-9223-abfdec233c6b", 00:13:52.527 "strip_size_kb": 0, 00:13:52.527 "state": "online", 00:13:52.527 "raid_level": "raid1", 00:13:52.527 "superblock": true, 00:13:52.527 "num_base_bdevs": 2, 00:13:52.527 "num_base_bdevs_discovered": 1, 00:13:52.527 "num_base_bdevs_operational": 1, 00:13:52.527 "base_bdevs_list": [ 00:13:52.527 { 00:13:52.527 "name": null, 00:13:52.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.527 "is_configured": false, 00:13:52.527 "data_offset": 0, 
00:13:52.527 "data_size": 63488 00:13:52.527 }, 00:13:52.527 { 00:13:52.527 "name": "BaseBdev2", 00:13:52.527 "uuid": "dc68aaa5-1e20-53a9-8d10-2309620c95ae", 00:13:52.527 "is_configured": true, 00:13:52.527 "data_offset": 2048, 00:13:52.527 "data_size": 63488 00:13:52.527 } 00:13:52.527 ] 00:13:52.527 }' 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76874 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76874 ']' 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76874 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76874 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76874' 00:13:52.527 killing process with pid 76874 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76874 00:13:52.527 Received shutdown signal, test time was 
about 16.820045 seconds 00:13:52.527 00:13:52.527 Latency(us) 00:13:52.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.527 =================================================================================================================== 00:13:52.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:52.527 [2024-09-28 16:15:07.144364] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.527 [2024-09-28 16:15:07.144519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.527 16:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76874 00:13:52.527 [2024-09-28 16:15:07.144579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.527 [2024-09-28 16:15:07.144592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:52.787 [2024-09-28 16:15:07.381116] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:54.168 ************************************ 00:13:54.168 END TEST raid_rebuild_test_sb_io 00:13:54.168 ************************************ 00:13:54.168 00:13:54.168 real 0m20.204s 00:13:54.168 user 0m26.218s 00:13:54.168 sys 0m2.297s 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.168 16:15:08 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:54.168 16:15:08 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:54.168 16:15:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:54.168 16:15:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:13:54.168 16:15:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.168 ************************************ 00:13:54.168 START TEST raid_rebuild_test 00:13:54.168 ************************************ 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77568 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77568 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77568 ']' 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.168 16:15:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.428 [2024-09-28 16:15:08.921399] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:54.428 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:54.428 Zero copy mechanism will not be used. 00:13:54.428 [2024-09-28 16:15:08.921601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77568 ] 00:13:54.428 [2024-09-28 16:15:09.075299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.688 [2024-09-28 16:15:09.316615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.948 [2024-09-28 16:15:09.546668] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.948 [2024-09-28 16:15:09.546708] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.208 BaseBdev1_malloc 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.208 [2024-09-28 16:15:09.782546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:55.208 [2024-09-28 16:15:09.782618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.208 [2024-09-28 16:15:09.782643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:55.208 [2024-09-28 16:15:09.782658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.208 [2024-09-28 16:15:09.785062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.208 [2024-09-28 16:15:09.785102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.208 BaseBdev1 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.208 BaseBdev2_malloc 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.208 [2024-09-28 16:15:09.855498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:55.208 [2024-09-28 16:15:09.855560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.208 [2024-09-28 16:15:09.855580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:55.208 [2024-09-28 16:15:09.855595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.208 [2024-09-28 16:15:09.858064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.208 [2024-09-28 16:15:09.858102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.208 BaseBdev2 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:55.208 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.209 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.468 BaseBdev3_malloc 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:55.468 16:15:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.468 [2024-09-28 16:15:09.916467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:55.468 [2024-09-28 16:15:09.916563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.468 [2024-09-28 16:15:09.916588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:55.468 [2024-09-28 16:15:09.916599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.468 [2024-09-28 16:15:09.918959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.468 [2024-09-28 16:15:09.919006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:55.468 BaseBdev3 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.468 BaseBdev4_malloc 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.468 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.468 [2024-09-28 16:15:09.977159] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:55.468 [2024-09-28 16:15:09.977209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.468 [2024-09-28 16:15:09.977239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:55.468 [2024-09-28 16:15:09.977251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.468 [2024-09-28 16:15:09.979587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.468 [2024-09-28 16:15:09.979685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:55.468 BaseBdev4 00:13:55.469 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.469 16:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:55.469 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.469 16:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.469 spare_malloc 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.469 spare_delay 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.469 [2024-09-28 16:15:10.049393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:55.469 [2024-09-28 16:15:10.049444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.469 [2024-09-28 16:15:10.049462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:55.469 [2024-09-28 16:15:10.049473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.469 [2024-09-28 16:15:10.051725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.469 [2024-09-28 16:15:10.051763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.469 spare 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.469 [2024-09-28 16:15:10.061439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.469 [2024-09-28 16:15:10.063425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.469 [2024-09-28 16:15:10.063494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.469 [2024-09-28 16:15:10.063547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.469 [2024-09-28 16:15:10.063623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:55.469 [2024-09-28 
16:15:10.063634] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:55.469 [2024-09-28 16:15:10.063885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:55.469 [2024-09-28 16:15:10.064054] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:55.469 [2024-09-28 16:15:10.064064] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:55.469 [2024-09-28 16:15:10.064211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.469 "name": "raid_bdev1", 00:13:55.469 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:13:55.469 "strip_size_kb": 0, 00:13:55.469 "state": "online", 00:13:55.469 "raid_level": "raid1", 00:13:55.469 "superblock": false, 00:13:55.469 "num_base_bdevs": 4, 00:13:55.469 "num_base_bdevs_discovered": 4, 00:13:55.469 "num_base_bdevs_operational": 4, 00:13:55.469 "base_bdevs_list": [ 00:13:55.469 { 00:13:55.469 "name": "BaseBdev1", 00:13:55.469 "uuid": "85315553-6cf0-5a33-82d3-d5817aee3f44", 00:13:55.469 "is_configured": true, 00:13:55.469 "data_offset": 0, 00:13:55.469 "data_size": 65536 00:13:55.469 }, 00:13:55.469 { 00:13:55.469 "name": "BaseBdev2", 00:13:55.469 "uuid": "8914baca-9804-54f5-9b9c-114f1c0eb737", 00:13:55.469 "is_configured": true, 00:13:55.469 "data_offset": 0, 00:13:55.469 "data_size": 65536 00:13:55.469 }, 00:13:55.469 { 00:13:55.469 "name": "BaseBdev3", 00:13:55.469 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:13:55.469 "is_configured": true, 00:13:55.469 "data_offset": 0, 00:13:55.469 "data_size": 65536 00:13:55.469 }, 00:13:55.469 { 00:13:55.469 "name": "BaseBdev4", 00:13:55.469 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:13:55.469 "is_configured": true, 00:13:55.469 "data_offset": 0, 00:13:55.469 "data_size": 65536 00:13:55.469 } 00:13:55.469 ] 00:13:55.469 }' 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.469 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.039 [2024-09-28 16:15:10.472928] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:56.039 16:15:10 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.039 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:56.299 [2024-09-28 16:15:10.740255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:56.299 /dev/nbd0 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.299 1+0 records in 00:13:56.299 1+0 records out 00:13:56.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423353 s, 9.7 MB/s 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:56.299 16:15:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:02.876 65536+0 records in 00:14:02.876 65536+0 records out 00:14:02.876 33554432 bytes (34 MB, 32 MiB) copied, 5.43668 s, 6.2 MB/s 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.876 [2024-09-28 16:15:16.449289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.876 [2024-09-28 16:15:16.485244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.876 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.876 "name": "raid_bdev1", 00:14:02.877 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:02.877 "strip_size_kb": 0, 00:14:02.877 "state": "online", 00:14:02.877 "raid_level": "raid1", 00:14:02.877 "superblock": false, 00:14:02.877 "num_base_bdevs": 4, 00:14:02.877 "num_base_bdevs_discovered": 3, 00:14:02.877 "num_base_bdevs_operational": 3, 00:14:02.877 "base_bdevs_list": [ 00:14:02.877 { 00:14:02.877 "name": null, 00:14:02.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.877 "is_configured": false, 00:14:02.877 "data_offset": 0, 00:14:02.877 "data_size": 
65536 00:14:02.877 }, 00:14:02.877 { 00:14:02.877 "name": "BaseBdev2", 00:14:02.877 "uuid": "8914baca-9804-54f5-9b9c-114f1c0eb737", 00:14:02.877 "is_configured": true, 00:14:02.877 "data_offset": 0, 00:14:02.877 "data_size": 65536 00:14:02.877 }, 00:14:02.877 { 00:14:02.877 "name": "BaseBdev3", 00:14:02.877 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:02.877 "is_configured": true, 00:14:02.877 "data_offset": 0, 00:14:02.877 "data_size": 65536 00:14:02.877 }, 00:14:02.877 { 00:14:02.877 "name": "BaseBdev4", 00:14:02.877 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:02.877 "is_configured": true, 00:14:02.877 "data_offset": 0, 00:14:02.877 "data_size": 65536 00:14:02.877 } 00:14:02.877 ] 00:14:02.877 }' 00:14:02.877 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.877 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.877 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.877 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 [2024-09-28 16:15:16.916446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.877 [2024-09-28 16:15:16.931455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:02.877 16:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.877 16:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:02.877 [2024-09-28 16:15:16.933590] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.462 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.462 "name": "raid_bdev1", 00:14:03.462 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:03.462 "strip_size_kb": 0, 00:14:03.462 "state": "online", 00:14:03.462 "raid_level": "raid1", 00:14:03.462 "superblock": false, 00:14:03.462 "num_base_bdevs": 4, 00:14:03.462 "num_base_bdevs_discovered": 4, 00:14:03.462 "num_base_bdevs_operational": 4, 00:14:03.462 "process": { 00:14:03.462 "type": "rebuild", 00:14:03.462 "target": "spare", 00:14:03.462 "progress": { 00:14:03.462 "blocks": 20480, 00:14:03.462 "percent": 31 00:14:03.462 } 00:14:03.462 }, 00:14:03.462 "base_bdevs_list": [ 00:14:03.462 { 00:14:03.462 "name": "spare", 00:14:03.462 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:03.462 "is_configured": true, 00:14:03.462 "data_offset": 0, 00:14:03.462 "data_size": 65536 00:14:03.462 }, 00:14:03.462 { 00:14:03.462 "name": "BaseBdev2", 00:14:03.462 "uuid": "8914baca-9804-54f5-9b9c-114f1c0eb737", 00:14:03.462 "is_configured": true, 00:14:03.462 "data_offset": 0, 
00:14:03.462 "data_size": 65536 00:14:03.462 }, 00:14:03.462 { 00:14:03.462 "name": "BaseBdev3", 00:14:03.462 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:03.462 "is_configured": true, 00:14:03.462 "data_offset": 0, 00:14:03.462 "data_size": 65536 00:14:03.462 }, 00:14:03.462 { 00:14:03.462 "name": "BaseBdev4", 00:14:03.462 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:03.462 "is_configured": true, 00:14:03.462 "data_offset": 0, 00:14:03.462 "data_size": 65536 00:14:03.462 } 00:14:03.462 ] 00:14:03.462 }' 00:14:03.463 16:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.463 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.463 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.463 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.463 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.463 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.463 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.463 [2024-09-28 16:15:18.077500] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.463 [2024-09-28 16:15:18.142204] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.463 [2024-09-28 16:15:18.142277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.463 [2024-09-28 16:15:18.142293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.463 [2024-09-28 16:15:18.142303] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.724 "name": "raid_bdev1", 00:14:03.724 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:03.724 "strip_size_kb": 0, 00:14:03.724 "state": "online", 00:14:03.724 "raid_level": "raid1", 00:14:03.724 "superblock": false, 00:14:03.724 
"num_base_bdevs": 4, 00:14:03.724 "num_base_bdevs_discovered": 3, 00:14:03.724 "num_base_bdevs_operational": 3, 00:14:03.724 "base_bdevs_list": [ 00:14:03.724 { 00:14:03.724 "name": null, 00:14:03.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.724 "is_configured": false, 00:14:03.724 "data_offset": 0, 00:14:03.724 "data_size": 65536 00:14:03.724 }, 00:14:03.724 { 00:14:03.724 "name": "BaseBdev2", 00:14:03.724 "uuid": "8914baca-9804-54f5-9b9c-114f1c0eb737", 00:14:03.724 "is_configured": true, 00:14:03.724 "data_offset": 0, 00:14:03.724 "data_size": 65536 00:14:03.724 }, 00:14:03.724 { 00:14:03.724 "name": "BaseBdev3", 00:14:03.724 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:03.724 "is_configured": true, 00:14:03.724 "data_offset": 0, 00:14:03.724 "data_size": 65536 00:14:03.724 }, 00:14:03.724 { 00:14:03.724 "name": "BaseBdev4", 00:14:03.724 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:03.724 "is_configured": true, 00:14:03.724 "data_offset": 0, 00:14:03.724 "data_size": 65536 00:14:03.724 } 00:14:03.724 ] 00:14:03.724 }' 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.724 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.984 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.984 "name": "raid_bdev1", 00:14:03.984 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:03.984 "strip_size_kb": 0, 00:14:03.984 "state": "online", 00:14:03.984 "raid_level": "raid1", 00:14:03.984 "superblock": false, 00:14:03.984 "num_base_bdevs": 4, 00:14:03.984 "num_base_bdevs_discovered": 3, 00:14:03.984 "num_base_bdevs_operational": 3, 00:14:03.984 "base_bdevs_list": [ 00:14:03.984 { 00:14:03.984 "name": null, 00:14:03.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.984 "is_configured": false, 00:14:03.984 "data_offset": 0, 00:14:03.984 "data_size": 65536 00:14:03.984 }, 00:14:03.984 { 00:14:03.985 "name": "BaseBdev2", 00:14:03.985 "uuid": "8914baca-9804-54f5-9b9c-114f1c0eb737", 00:14:03.985 "is_configured": true, 00:14:03.985 "data_offset": 0, 00:14:03.985 "data_size": 65536 00:14:03.985 }, 00:14:03.985 { 00:14:03.985 "name": "BaseBdev3", 00:14:03.985 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:03.985 "is_configured": true, 00:14:03.985 "data_offset": 0, 00:14:03.985 "data_size": 65536 00:14:03.985 }, 00:14:03.985 { 00:14:03.985 "name": "BaseBdev4", 00:14:03.985 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:03.985 "is_configured": true, 00:14:03.985 "data_offset": 0, 00:14:03.985 "data_size": 65536 00:14:03.985 } 00:14:03.985 ] 00:14:03.985 }' 00:14:03.985 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.985 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.985 16:15:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.245 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.245 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.245 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.245 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.245 [2024-09-28 16:15:18.718566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.245 [2024-09-28 16:15:18.731612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:04.245 16:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.245 16:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:04.245 [2024-09-28 16:15:18.733764] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.184 
16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.184 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.184 "name": "raid_bdev1", 00:14:05.184 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:05.184 "strip_size_kb": 0, 00:14:05.184 "state": "online", 00:14:05.184 "raid_level": "raid1", 00:14:05.184 "superblock": false, 00:14:05.184 "num_base_bdevs": 4, 00:14:05.184 "num_base_bdevs_discovered": 4, 00:14:05.184 "num_base_bdevs_operational": 4, 00:14:05.184 "process": { 00:14:05.184 "type": "rebuild", 00:14:05.184 "target": "spare", 00:14:05.184 "progress": { 00:14:05.184 "blocks": 20480, 00:14:05.184 "percent": 31 00:14:05.184 } 00:14:05.184 }, 00:14:05.184 "base_bdevs_list": [ 00:14:05.184 { 00:14:05.184 "name": "spare", 00:14:05.184 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:05.184 "is_configured": true, 00:14:05.184 "data_offset": 0, 00:14:05.184 "data_size": 65536 00:14:05.184 }, 00:14:05.184 { 00:14:05.184 "name": "BaseBdev2", 00:14:05.184 "uuid": "8914baca-9804-54f5-9b9c-114f1c0eb737", 00:14:05.184 "is_configured": true, 00:14:05.184 "data_offset": 0, 00:14:05.184 "data_size": 65536 00:14:05.184 }, 00:14:05.184 { 00:14:05.184 "name": "BaseBdev3", 00:14:05.184 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:05.184 "is_configured": true, 00:14:05.185 "data_offset": 0, 00:14:05.185 "data_size": 65536 00:14:05.185 }, 00:14:05.185 { 00:14:05.185 "name": "BaseBdev4", 00:14:05.185 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:05.185 "is_configured": true, 00:14:05.185 "data_offset": 0, 00:14:05.185 "data_size": 65536 00:14:05.185 } 00:14:05.185 ] 00:14:05.185 }' 00:14:05.185 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.185 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:05.185 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.445 [2024-09-28 16:15:19.901415] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:05.445 [2024-09-28 16:15:19.942142] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.445 
16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.445 "name": "raid_bdev1", 00:14:05.445 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:05.445 "strip_size_kb": 0, 00:14:05.445 "state": "online", 00:14:05.445 "raid_level": "raid1", 00:14:05.445 "superblock": false, 00:14:05.445 "num_base_bdevs": 4, 00:14:05.445 "num_base_bdevs_discovered": 3, 00:14:05.445 "num_base_bdevs_operational": 3, 00:14:05.445 "process": { 00:14:05.445 "type": "rebuild", 00:14:05.445 "target": "spare", 00:14:05.445 "progress": { 00:14:05.445 "blocks": 24576, 00:14:05.445 "percent": 37 00:14:05.445 } 00:14:05.445 }, 00:14:05.445 "base_bdevs_list": [ 00:14:05.445 { 00:14:05.445 "name": "spare", 00:14:05.445 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:05.445 "is_configured": true, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 }, 00:14:05.445 { 00:14:05.445 "name": null, 00:14:05.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.445 "is_configured": false, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 }, 00:14:05.445 { 00:14:05.445 "name": "BaseBdev3", 00:14:05.445 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:05.445 "is_configured": true, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 }, 00:14:05.445 { 
00:14:05.445 "name": "BaseBdev4", 00:14:05.445 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:05.445 "is_configured": true, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 } 00:14:05.445 ] 00:14:05.445 }' 00:14:05.445 16:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.445 16:15:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.445 "name": "raid_bdev1", 00:14:05.445 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:05.445 "strip_size_kb": 0, 00:14:05.445 "state": "online", 00:14:05.445 "raid_level": "raid1", 00:14:05.445 "superblock": false, 00:14:05.445 "num_base_bdevs": 4, 00:14:05.445 "num_base_bdevs_discovered": 3, 00:14:05.445 "num_base_bdevs_operational": 3, 00:14:05.445 "process": { 00:14:05.445 "type": "rebuild", 00:14:05.445 "target": "spare", 00:14:05.445 "progress": { 00:14:05.445 "blocks": 26624, 00:14:05.445 "percent": 40 00:14:05.445 } 00:14:05.445 }, 00:14:05.445 "base_bdevs_list": [ 00:14:05.445 { 00:14:05.445 "name": "spare", 00:14:05.445 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:05.445 "is_configured": true, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 }, 00:14:05.445 { 00:14:05.445 "name": null, 00:14:05.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.445 "is_configured": false, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 }, 00:14:05.445 { 00:14:05.445 "name": "BaseBdev3", 00:14:05.445 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:05.445 "is_configured": true, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 }, 00:14:05.445 { 00:14:05.445 "name": "BaseBdev4", 00:14:05.445 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:05.445 "is_configured": true, 00:14:05.445 "data_offset": 0, 00:14:05.445 "data_size": 65536 00:14:05.445 } 00:14:05.445 ] 00:14:05.445 }' 00:14:05.445 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.706 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.706 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.706 16:15:20 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.706 16:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.646 "name": "raid_bdev1", 00:14:06.646 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:06.646 "strip_size_kb": 0, 00:14:06.646 "state": "online", 00:14:06.646 "raid_level": "raid1", 00:14:06.646 "superblock": false, 00:14:06.646 "num_base_bdevs": 4, 00:14:06.646 "num_base_bdevs_discovered": 3, 00:14:06.646 "num_base_bdevs_operational": 3, 00:14:06.646 "process": { 00:14:06.646 "type": "rebuild", 00:14:06.646 "target": "spare", 00:14:06.646 "progress": { 00:14:06.646 "blocks": 49152, 00:14:06.646 "percent": 75 00:14:06.646 } 00:14:06.646 }, 00:14:06.646 
"base_bdevs_list": [ 00:14:06.646 { 00:14:06.646 "name": "spare", 00:14:06.646 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:06.646 "is_configured": true, 00:14:06.646 "data_offset": 0, 00:14:06.646 "data_size": 65536 00:14:06.646 }, 00:14:06.646 { 00:14:06.646 "name": null, 00:14:06.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.646 "is_configured": false, 00:14:06.646 "data_offset": 0, 00:14:06.646 "data_size": 65536 00:14:06.646 }, 00:14:06.646 { 00:14:06.646 "name": "BaseBdev3", 00:14:06.646 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:06.646 "is_configured": true, 00:14:06.646 "data_offset": 0, 00:14:06.646 "data_size": 65536 00:14:06.646 }, 00:14:06.646 { 00:14:06.646 "name": "BaseBdev4", 00:14:06.646 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:06.646 "is_configured": true, 00:14:06.646 "data_offset": 0, 00:14:06.646 "data_size": 65536 00:14:06.646 } 00:14:06.646 ] 00:14:06.646 }' 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.646 16:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.586 [2024-09-28 16:15:21.956310] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:07.586 [2024-09-28 16:15:21.956434] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:07.586 [2024-09-28 16:15:21.956510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.847 16:15:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.847 "name": "raid_bdev1", 00:14:07.847 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:07.847 "strip_size_kb": 0, 00:14:07.847 "state": "online", 00:14:07.847 "raid_level": "raid1", 00:14:07.847 "superblock": false, 00:14:07.847 "num_base_bdevs": 4, 00:14:07.847 "num_base_bdevs_discovered": 3, 00:14:07.847 "num_base_bdevs_operational": 3, 00:14:07.847 "base_bdevs_list": [ 00:14:07.847 { 00:14:07.847 "name": "spare", 00:14:07.847 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:07.847 "is_configured": true, 00:14:07.847 "data_offset": 0, 00:14:07.847 "data_size": 65536 00:14:07.847 }, 00:14:07.847 { 00:14:07.847 "name": null, 00:14:07.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.847 "is_configured": false, 00:14:07.847 "data_offset": 0, 00:14:07.847 "data_size": 65536 00:14:07.847 }, 
00:14:07.847 { 00:14:07.847 "name": "BaseBdev3", 00:14:07.847 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:07.847 "is_configured": true, 00:14:07.847 "data_offset": 0, 00:14:07.847 "data_size": 65536 00:14:07.847 }, 00:14:07.847 { 00:14:07.847 "name": "BaseBdev4", 00:14:07.847 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:07.847 "is_configured": true, 00:14:07.847 "data_offset": 0, 00:14:07.847 "data_size": 65536 00:14:07.847 } 00:14:07.847 ] 00:14:07.847 }' 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.847 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.848 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.848 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.848 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.848 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.848 
16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.848 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.848 "name": "raid_bdev1", 00:14:07.848 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:07.848 "strip_size_kb": 0, 00:14:07.848 "state": "online", 00:14:07.848 "raid_level": "raid1", 00:14:07.848 "superblock": false, 00:14:07.848 "num_base_bdevs": 4, 00:14:07.848 "num_base_bdevs_discovered": 3, 00:14:07.848 "num_base_bdevs_operational": 3, 00:14:07.848 "base_bdevs_list": [ 00:14:07.848 { 00:14:07.848 "name": "spare", 00:14:07.848 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:07.848 "is_configured": true, 00:14:07.848 "data_offset": 0, 00:14:07.848 "data_size": 65536 00:14:07.848 }, 00:14:07.848 { 00:14:07.848 "name": null, 00:14:07.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.848 "is_configured": false, 00:14:07.848 "data_offset": 0, 00:14:07.848 "data_size": 65536 00:14:07.848 }, 00:14:07.848 { 00:14:07.848 "name": "BaseBdev3", 00:14:07.848 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:07.848 "is_configured": true, 00:14:07.848 "data_offset": 0, 00:14:07.848 "data_size": 65536 00:14:07.848 }, 00:14:07.848 { 00:14:07.848 "name": "BaseBdev4", 00:14:07.848 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:07.848 "is_configured": true, 00:14:07.848 "data_offset": 0, 00:14:07.848 "data_size": 65536 00:14:07.848 } 00:14:07.848 ] 00:14:07.848 }' 00:14:07.848 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.108 "name": "raid_bdev1", 00:14:08.108 "uuid": "d7c4f150-7555-4590-9f51-34d093e80a29", 00:14:08.108 "strip_size_kb": 0, 00:14:08.108 "state": "online", 00:14:08.108 "raid_level": "raid1", 00:14:08.108 "superblock": false, 00:14:08.108 "num_base_bdevs": 4, 00:14:08.108 "num_base_bdevs_discovered": 3, 00:14:08.108 
"num_base_bdevs_operational": 3, 00:14:08.108 "base_bdevs_list": [ 00:14:08.108 { 00:14:08.108 "name": "spare", 00:14:08.108 "uuid": "262a8f3d-da14-5071-8e3c-0244efd00ce2", 00:14:08.108 "is_configured": true, 00:14:08.108 "data_offset": 0, 00:14:08.108 "data_size": 65536 00:14:08.108 }, 00:14:08.108 { 00:14:08.108 "name": null, 00:14:08.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.108 "is_configured": false, 00:14:08.108 "data_offset": 0, 00:14:08.108 "data_size": 65536 00:14:08.108 }, 00:14:08.108 { 00:14:08.108 "name": "BaseBdev3", 00:14:08.108 "uuid": "30264d84-fe3d-5642-aac2-4e1236946186", 00:14:08.108 "is_configured": true, 00:14:08.108 "data_offset": 0, 00:14:08.108 "data_size": 65536 00:14:08.108 }, 00:14:08.108 { 00:14:08.108 "name": "BaseBdev4", 00:14:08.108 "uuid": "b9164632-01c0-569b-9028-be9522e5c9eb", 00:14:08.108 "is_configured": true, 00:14:08.108 "data_offset": 0, 00:14:08.108 "data_size": 65536 00:14:08.108 } 00:14:08.108 ] 00:14:08.108 }' 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.108 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.368 16:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.368 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.368 16:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.368 [2024-09-28 16:15:22.998343] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.368 [2024-09-28 16:15:22.998377] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.368 [2024-09-28 16:15:22.998464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.368 [2024-09-28 16:15:22.998550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:14:08.368 [2024-09-28 16:15:22.998560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:08.368 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.368 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.368 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.368 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:08.368 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.368 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:08.629 /dev/nbd0 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.629 1+0 records in 00:14:08.629 1+0 records out 00:14:08.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019902 s, 20.6 MB/s 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.629 16:15:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.629 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:08.889 /dev/nbd1 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.889 1+0 records in 00:14:08.889 1+0 records out 00:14:08.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363151 s, 11.3 MB/s 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.889 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:09.149 16:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:09.149 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.149 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:09.149 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.149 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:09.149 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.149 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:09.408 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.409 16:15:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77568 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77568 ']' 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77568 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77568 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77568' 00:14:09.669 killing process with pid 77568 00:14:09.669 Received shutdown signal, test time was about 60.000000 seconds 00:14:09.669 00:14:09.669 Latency(us) 00:14:09.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.669 =================================================================================================================== 00:14:09.669 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77568 00:14:09.669 [2024-09-28 16:15:24.166684] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.669 16:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77568 00:14:10.239 [2024-09-28 16:15:24.676119] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.621 16:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:11.621 ************************************ 00:14:11.621 END TEST raid_rebuild_test 00:14:11.621 ************************************ 00:14:11.621 00:14:11.621 real 0m17.166s 00:14:11.621 user 0m18.578s 00:14:11.621 sys 0m3.309s 00:14:11.621 16:15:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.622 16:15:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.622 16:15:26 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb 
raid_rebuild_test raid1 4 true false true 00:14:11.622 16:15:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:11.622 16:15:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.622 16:15:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.622 ************************************ 00:14:11.622 START TEST raid_rebuild_test_sb 00:14:11.622 ************************************ 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78003 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:11.622 16:15:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78003 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78003 ']' 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.622 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.622 Zero copy mechanism will not be used. 00:14:11.622 [2024-09-28 16:15:26.151634] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:11.622 [2024-09-28 16:15:26.151743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78003 ] 00:14:11.881 [2024-09-28 16:15:26.313418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.881 [2024-09-28 16:15:26.555711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.141 [2024-09-28 16:15:26.795659] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.141 [2024-09-28 16:15:26.795804] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.400 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.400 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:12.400 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.400 16:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:12.400 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.400 16:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.400 BaseBdev1_malloc 00:14:12.400 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.400 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.400 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.401 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.401 [2024-09-28 16:15:27.017514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:12.401 [2024-09-28 16:15:27.017877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.401 [2024-09-28 16:15:27.017968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.401 [2024-09-28 16:15:27.018025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.401 [2024-09-28 16:15:27.020433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.401 [2024-09-28 16:15:27.020539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.401 BaseBdev1 00:14:12.401 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.401 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.401 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.401 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.401 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 BaseBdev2_malloc 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 [2024-09-28 16:15:27.103693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.661 [2024-09-28 16:15:27.103923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.661 [2024-09-28 16:15:27.103984] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.661 [2024-09-28 16:15:27.104034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.661 [2024-09-28 16:15:27.106400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.661 [2024-09-28 16:15:27.106526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.661 BaseBdev2 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 BaseBdev3_malloc 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 [2024-09-28 16:15:27.159963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:12.661 [2024-09-28 16:15:27.160400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.661 [2024-09-28 16:15:27.160496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:12.661 [2024-09-28 16:15:27.160574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:12.661 [2024-09-28 16:15:27.162890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.661 [2024-09-28 16:15:27.163040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:12.661 BaseBdev3 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 BaseBdev4_malloc 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 [2024-09-28 16:15:27.222671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:12.661 [2024-09-28 16:15:27.222725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.661 [2024-09-28 16:15:27.222746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:12.661 [2024-09-28 16:15:27.222757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.661 [2024-09-28 16:15:27.225036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.661 [2024-09-28 16:15:27.225075] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:12.661 BaseBdev4 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 spare_malloc 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 spare_delay 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 [2024-09-28 16:15:27.290528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.661 [2024-09-28 16:15:27.290585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.661 [2024-09-28 16:15:27.290607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:12.661 [2024-09-28 16:15:27.290618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:12.661 [2024-09-28 16:15:27.292884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.661 [2024-09-28 16:15:27.293004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.661 spare 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 [2024-09-28 16:15:27.302572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.661 [2024-09-28 16:15:27.304596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.661 [2024-09-28 16:15:27.304666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.661 [2024-09-28 16:15:27.304717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.661 [2024-09-28 16:15:27.304891] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.661 [2024-09-28 16:15:27.304915] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.661 [2024-09-28 16:15:27.305152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:12.661 [2024-09-28 16:15:27.305331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.661 [2024-09-28 16:15:27.305342] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.661 [2024-09-28 16:15:27.305481] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.921 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.921 "name": "raid_bdev1", 00:14:12.921 "uuid": 
"9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:12.921 "strip_size_kb": 0, 00:14:12.921 "state": "online", 00:14:12.921 "raid_level": "raid1", 00:14:12.921 "superblock": true, 00:14:12.921 "num_base_bdevs": 4, 00:14:12.921 "num_base_bdevs_discovered": 4, 00:14:12.921 "num_base_bdevs_operational": 4, 00:14:12.921 "base_bdevs_list": [ 00:14:12.921 { 00:14:12.921 "name": "BaseBdev1", 00:14:12.921 "uuid": "ad362b37-fdd3-5752-8058-f5b9262a53bc", 00:14:12.921 "is_configured": true, 00:14:12.921 "data_offset": 2048, 00:14:12.921 "data_size": 63488 00:14:12.921 }, 00:14:12.921 { 00:14:12.921 "name": "BaseBdev2", 00:14:12.921 "uuid": "618621c5-99e6-53b0-a4cc-57bdc29a8b03", 00:14:12.921 "is_configured": true, 00:14:12.921 "data_offset": 2048, 00:14:12.921 "data_size": 63488 00:14:12.921 }, 00:14:12.921 { 00:14:12.921 "name": "BaseBdev3", 00:14:12.921 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:12.921 "is_configured": true, 00:14:12.921 "data_offset": 2048, 00:14:12.921 "data_size": 63488 00:14:12.921 }, 00:14:12.921 { 00:14:12.921 "name": "BaseBdev4", 00:14:12.921 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:12.921 "is_configured": true, 00:14:12.921 "data_offset": 2048, 00:14:12.921 "data_size": 63488 00:14:12.921 } 00:14:12.921 ] 00:14:12.921 }' 00:14:12.921 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.921 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.181 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.181 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.181 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.181 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.182 [2024-09-28 16:15:27.702174] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:13.182 16:15:27 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.182 16:15:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:13.441 [2024-09-28 16:15:27.973404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:13.441 /dev/nbd0 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.441 1+0 records in 00:14:13.441 1+0 records out 00:14:13.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595858 s, 6.9 MB/s 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:13.441 16:15:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:18.721 63488+0 records in 00:14:18.721 63488+0 records out 00:14:18.721 32505856 bytes (33 MB, 31 MiB) copied, 5.16177 s, 6.3 MB/s 00:14:18.721 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:18.721 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.721 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:18.721 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.721 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:18.721 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.721 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:18.981 [2024-09-28 16:15:33.405752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 [2024-09-28 16:15:33.433796] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.981 "name": "raid_bdev1", 00:14:18.981 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:18.981 "strip_size_kb": 0, 00:14:18.981 "state": "online", 00:14:18.981 "raid_level": "raid1", 00:14:18.981 "superblock": true, 00:14:18.981 "num_base_bdevs": 4, 00:14:18.981 "num_base_bdevs_discovered": 3, 00:14:18.981 "num_base_bdevs_operational": 3, 00:14:18.981 "base_bdevs_list": [ 00:14:18.981 { 00:14:18.981 "name": null, 00:14:18.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.981 "is_configured": false, 00:14:18.981 "data_offset": 0, 00:14:18.981 "data_size": 63488 00:14:18.981 }, 00:14:18.981 { 00:14:18.981 "name": "BaseBdev2", 00:14:18.981 "uuid": "618621c5-99e6-53b0-a4cc-57bdc29a8b03", 00:14:18.981 "is_configured": true, 00:14:18.981 
"data_offset": 2048, 00:14:18.981 "data_size": 63488 00:14:18.981 }, 00:14:18.981 { 00:14:18.981 "name": "BaseBdev3", 00:14:18.981 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:18.981 "is_configured": true, 00:14:18.981 "data_offset": 2048, 00:14:18.981 "data_size": 63488 00:14:18.981 }, 00:14:18.981 { 00:14:18.981 "name": "BaseBdev4", 00:14:18.981 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:18.981 "is_configured": true, 00:14:18.981 "data_offset": 2048, 00:14:18.981 "data_size": 63488 00:14:18.981 } 00:14:18.981 ] 00:14:18.981 }' 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.981 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.241 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.241 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.241 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.241 [2024-09-28 16:15:33.901263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.241 [2024-09-28 16:15:33.916657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:19.241 16:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.241 16:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:19.241 [2024-09-28 16:15:33.918796] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.621 "name": "raid_bdev1", 00:14:20.621 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:20.621 "strip_size_kb": 0, 00:14:20.621 "state": "online", 00:14:20.621 "raid_level": "raid1", 00:14:20.621 "superblock": true, 00:14:20.621 "num_base_bdevs": 4, 00:14:20.621 "num_base_bdevs_discovered": 4, 00:14:20.621 "num_base_bdevs_operational": 4, 00:14:20.621 "process": { 00:14:20.621 "type": "rebuild", 00:14:20.621 "target": "spare", 00:14:20.621 "progress": { 00:14:20.621 "blocks": 20480, 00:14:20.621 "percent": 32 00:14:20.621 } 00:14:20.621 }, 00:14:20.621 "base_bdevs_list": [ 00:14:20.621 { 00:14:20.621 "name": "spare", 00:14:20.621 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:20.621 "is_configured": true, 00:14:20.621 "data_offset": 2048, 00:14:20.621 "data_size": 63488 00:14:20.621 }, 00:14:20.621 { 00:14:20.621 "name": "BaseBdev2", 00:14:20.621 "uuid": "618621c5-99e6-53b0-a4cc-57bdc29a8b03", 00:14:20.621 "is_configured": true, 00:14:20.621 "data_offset": 2048, 00:14:20.621 "data_size": 63488 00:14:20.621 }, 00:14:20.621 { 00:14:20.621 "name": "BaseBdev3", 00:14:20.621 "uuid": 
"6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:20.621 "is_configured": true, 00:14:20.621 "data_offset": 2048, 00:14:20.621 "data_size": 63488 00:14:20.621 }, 00:14:20.621 { 00:14:20.621 "name": "BaseBdev4", 00:14:20.621 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:20.621 "is_configured": true, 00:14:20.621 "data_offset": 2048, 00:14:20.621 "data_size": 63488 00:14:20.621 } 00:14:20.621 ] 00:14:20.621 }' 00:14:20.621 16:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.621 [2024-09-28 16:15:35.086640] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.621 [2024-09-28 16:15:35.127419] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:20.621 [2024-09-28 16:15:35.127481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.621 [2024-09-28 16:15:35.127498] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.621 [2024-09-28 16:15:35.127508] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.621 "name": "raid_bdev1", 00:14:20.621 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:20.621 "strip_size_kb": 0, 00:14:20.621 "state": "online", 00:14:20.621 "raid_level": "raid1", 00:14:20.621 "superblock": true, 00:14:20.621 "num_base_bdevs": 4, 00:14:20.621 
"num_base_bdevs_discovered": 3, 00:14:20.621 "num_base_bdevs_operational": 3, 00:14:20.621 "base_bdevs_list": [ 00:14:20.621 { 00:14:20.621 "name": null, 00:14:20.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.621 "is_configured": false, 00:14:20.621 "data_offset": 0, 00:14:20.621 "data_size": 63488 00:14:20.621 }, 00:14:20.621 { 00:14:20.621 "name": "BaseBdev2", 00:14:20.621 "uuid": "618621c5-99e6-53b0-a4cc-57bdc29a8b03", 00:14:20.621 "is_configured": true, 00:14:20.621 "data_offset": 2048, 00:14:20.621 "data_size": 63488 00:14:20.621 }, 00:14:20.621 { 00:14:20.621 "name": "BaseBdev3", 00:14:20.621 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:20.621 "is_configured": true, 00:14:20.621 "data_offset": 2048, 00:14:20.621 "data_size": 63488 00:14:20.621 }, 00:14:20.621 { 00:14:20.621 "name": "BaseBdev4", 00:14:20.621 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:20.621 "is_configured": true, 00:14:20.621 "data_offset": 2048, 00:14:20.621 "data_size": 63488 00:14:20.621 } 00:14:20.621 ] 00:14:20.621 }' 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.621 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.881 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.140 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.140 "name": "raid_bdev1", 00:14:21.140 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:21.140 "strip_size_kb": 0, 00:14:21.140 "state": "online", 00:14:21.140 "raid_level": "raid1", 00:14:21.140 "superblock": true, 00:14:21.140 "num_base_bdevs": 4, 00:14:21.140 "num_base_bdevs_discovered": 3, 00:14:21.140 "num_base_bdevs_operational": 3, 00:14:21.140 "base_bdevs_list": [ 00:14:21.140 { 00:14:21.140 "name": null, 00:14:21.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.140 "is_configured": false, 00:14:21.140 "data_offset": 0, 00:14:21.140 "data_size": 63488 00:14:21.140 }, 00:14:21.140 { 00:14:21.140 "name": "BaseBdev2", 00:14:21.140 "uuid": "618621c5-99e6-53b0-a4cc-57bdc29a8b03", 00:14:21.140 "is_configured": true, 00:14:21.140 "data_offset": 2048, 00:14:21.140 "data_size": 63488 00:14:21.140 }, 00:14:21.140 { 00:14:21.140 "name": "BaseBdev3", 00:14:21.140 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:21.140 "is_configured": true, 00:14:21.140 "data_offset": 2048, 00:14:21.140 "data_size": 63488 00:14:21.140 }, 00:14:21.140 { 00:14:21.140 "name": "BaseBdev4", 00:14:21.140 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:21.140 "is_configured": true, 00:14:21.140 "data_offset": 2048, 00:14:21.141 "data_size": 63488 00:14:21.141 } 00:14:21.141 ] 00:14:21.141 }' 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.141 [2024-09-28 16:15:35.682789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.141 [2024-09-28 16:15:35.695946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.141 16:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:21.141 [2024-09-28 16:15:35.698056] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.079 "name": "raid_bdev1", 00:14:22.079 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:22.079 "strip_size_kb": 0, 00:14:22.079 "state": "online", 00:14:22.079 "raid_level": "raid1", 00:14:22.079 "superblock": true, 00:14:22.079 "num_base_bdevs": 4, 00:14:22.079 "num_base_bdevs_discovered": 4, 00:14:22.079 "num_base_bdevs_operational": 4, 00:14:22.079 "process": { 00:14:22.079 "type": "rebuild", 00:14:22.079 "target": "spare", 00:14:22.079 "progress": { 00:14:22.079 "blocks": 20480, 00:14:22.079 "percent": 32 00:14:22.079 } 00:14:22.079 }, 00:14:22.079 "base_bdevs_list": [ 00:14:22.079 { 00:14:22.079 "name": "spare", 00:14:22.079 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:22.079 "is_configured": true, 00:14:22.079 "data_offset": 2048, 00:14:22.079 "data_size": 63488 00:14:22.079 }, 00:14:22.079 { 00:14:22.079 "name": "BaseBdev2", 00:14:22.079 "uuid": "618621c5-99e6-53b0-a4cc-57bdc29a8b03", 00:14:22.079 "is_configured": true, 00:14:22.079 "data_offset": 2048, 00:14:22.079 "data_size": 63488 00:14:22.079 }, 00:14:22.079 { 00:14:22.079 "name": "BaseBdev3", 00:14:22.079 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:22.079 "is_configured": true, 00:14:22.079 "data_offset": 2048, 00:14:22.079 "data_size": 63488 00:14:22.079 }, 00:14:22.079 { 00:14:22.079 "name": "BaseBdev4", 00:14:22.079 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:22.079 "is_configured": true, 00:14:22.079 "data_offset": 2048, 00:14:22.079 "data_size": 63488 00:14:22.079 } 00:14:22.079 ] 00:14:22.079 }' 00:14:22.079 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:22.339 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.339 16:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.339 [2024-09-28 16:15:36.841777] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.339 [2024-09-28 16:15:37.006347] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.339 16:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.598 16:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.598 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.598 "name": "raid_bdev1", 00:14:22.598 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:22.598 "strip_size_kb": 0, 00:14:22.598 "state": "online", 00:14:22.598 "raid_level": "raid1", 00:14:22.598 "superblock": true, 00:14:22.598 "num_base_bdevs": 4, 00:14:22.598 "num_base_bdevs_discovered": 3, 00:14:22.598 "num_base_bdevs_operational": 3, 00:14:22.598 "process": { 00:14:22.598 "type": "rebuild", 00:14:22.598 "target": "spare", 00:14:22.598 "progress": { 00:14:22.598 "blocks": 24576, 00:14:22.598 "percent": 38 00:14:22.598 } 00:14:22.598 }, 00:14:22.598 "base_bdevs_list": [ 00:14:22.598 { 00:14:22.599 "name": "spare", 00:14:22.599 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:22.599 "is_configured": true, 00:14:22.599 "data_offset": 2048, 00:14:22.599 "data_size": 63488 00:14:22.599 }, 00:14:22.599 { 00:14:22.599 "name": null, 00:14:22.599 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:22.599 "is_configured": false, 00:14:22.599 "data_offset": 0, 00:14:22.599 "data_size": 63488 00:14:22.599 }, 00:14:22.599 { 00:14:22.599 "name": "BaseBdev3", 00:14:22.599 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:22.599 "is_configured": true, 00:14:22.599 "data_offset": 2048, 00:14:22.599 "data_size": 63488 00:14:22.599 }, 00:14:22.599 { 00:14:22.599 "name": "BaseBdev4", 00:14:22.599 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:22.599 "is_configured": true, 00:14:22.599 "data_offset": 2048, 00:14:22.599 "data_size": 63488 00:14:22.599 } 00:14:22.599 ] 00:14:22.599 }' 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=470 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.599 "name": "raid_bdev1", 00:14:22.599 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:22.599 "strip_size_kb": 0, 00:14:22.599 "state": "online", 00:14:22.599 "raid_level": "raid1", 00:14:22.599 "superblock": true, 00:14:22.599 "num_base_bdevs": 4, 00:14:22.599 "num_base_bdevs_discovered": 3, 00:14:22.599 "num_base_bdevs_operational": 3, 00:14:22.599 "process": { 00:14:22.599 "type": "rebuild", 00:14:22.599 "target": "spare", 00:14:22.599 "progress": { 00:14:22.599 "blocks": 26624, 00:14:22.599 "percent": 41 00:14:22.599 } 00:14:22.599 }, 00:14:22.599 "base_bdevs_list": [ 00:14:22.599 { 00:14:22.599 "name": "spare", 00:14:22.599 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:22.599 "is_configured": true, 00:14:22.599 "data_offset": 2048, 00:14:22.599 "data_size": 63488 00:14:22.599 }, 00:14:22.599 { 00:14:22.599 "name": null, 00:14:22.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.599 "is_configured": false, 00:14:22.599 "data_offset": 0, 00:14:22.599 "data_size": 63488 00:14:22.599 }, 00:14:22.599 { 00:14:22.599 "name": "BaseBdev3", 00:14:22.599 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:22.599 "is_configured": true, 00:14:22.599 "data_offset": 2048, 00:14:22.599 "data_size": 63488 00:14:22.599 }, 00:14:22.599 { 00:14:22.599 "name": "BaseBdev4", 00:14:22.599 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:22.599 "is_configured": true, 00:14:22.599 "data_offset": 2048, 00:14:22.599 "data_size": 63488 
00:14:22.599 } 00:14:22.599 ] 00:14:22.599 }' 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.599 16:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.981 "name": "raid_bdev1", 00:14:23.981 "uuid": 
"9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:23.981 "strip_size_kb": 0, 00:14:23.981 "state": "online", 00:14:23.981 "raid_level": "raid1", 00:14:23.981 "superblock": true, 00:14:23.981 "num_base_bdevs": 4, 00:14:23.981 "num_base_bdevs_discovered": 3, 00:14:23.981 "num_base_bdevs_operational": 3, 00:14:23.981 "process": { 00:14:23.981 "type": "rebuild", 00:14:23.981 "target": "spare", 00:14:23.981 "progress": { 00:14:23.981 "blocks": 49152, 00:14:23.981 "percent": 77 00:14:23.981 } 00:14:23.981 }, 00:14:23.981 "base_bdevs_list": [ 00:14:23.981 { 00:14:23.981 "name": "spare", 00:14:23.981 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:23.981 "is_configured": true, 00:14:23.981 "data_offset": 2048, 00:14:23.981 "data_size": 63488 00:14:23.981 }, 00:14:23.981 { 00:14:23.981 "name": null, 00:14:23.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.981 "is_configured": false, 00:14:23.981 "data_offset": 0, 00:14:23.981 "data_size": 63488 00:14:23.981 }, 00:14:23.981 { 00:14:23.981 "name": "BaseBdev3", 00:14:23.981 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:23.981 "is_configured": true, 00:14:23.981 "data_offset": 2048, 00:14:23.981 "data_size": 63488 00:14:23.981 }, 00:14:23.981 { 00:14:23.981 "name": "BaseBdev4", 00:14:23.981 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:23.981 "is_configured": true, 00:14:23.981 "data_offset": 2048, 00:14:23.981 "data_size": 63488 00:14:23.981 } 00:14:23.981 ] 00:14:23.981 }' 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.981 16:15:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.240 [2024-09-28 16:15:38.919786] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:24.240 [2024-09-28 16:15:38.919911] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:24.240 [2024-09-28 16:15:38.920054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.809 "name": "raid_bdev1", 00:14:24.809 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:24.809 "strip_size_kb": 0, 00:14:24.809 "state": "online", 00:14:24.809 "raid_level": "raid1", 00:14:24.809 "superblock": true, 00:14:24.809 "num_base_bdevs": 
4, 00:14:24.809 "num_base_bdevs_discovered": 3, 00:14:24.809 "num_base_bdevs_operational": 3, 00:14:24.809 "base_bdevs_list": [ 00:14:24.809 { 00:14:24.809 "name": "spare", 00:14:24.809 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:24.809 "is_configured": true, 00:14:24.809 "data_offset": 2048, 00:14:24.809 "data_size": 63488 00:14:24.809 }, 00:14:24.809 { 00:14:24.809 "name": null, 00:14:24.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.809 "is_configured": false, 00:14:24.809 "data_offset": 0, 00:14:24.809 "data_size": 63488 00:14:24.809 }, 00:14:24.809 { 00:14:24.809 "name": "BaseBdev3", 00:14:24.809 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:24.809 "is_configured": true, 00:14:24.809 "data_offset": 2048, 00:14:24.809 "data_size": 63488 00:14:24.809 }, 00:14:24.809 { 00:14:24.809 "name": "BaseBdev4", 00:14:24.809 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:24.809 "is_configured": true, 00:14:24.809 "data_offset": 2048, 00:14:24.809 "data_size": 63488 00:14:24.809 } 00:14:24.809 ] 00:14:24.809 }' 00:14:24.809 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.069 16:15:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.069 "name": "raid_bdev1", 00:14:25.069 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:25.069 "strip_size_kb": 0, 00:14:25.069 "state": "online", 00:14:25.069 "raid_level": "raid1", 00:14:25.069 "superblock": true, 00:14:25.069 "num_base_bdevs": 4, 00:14:25.069 "num_base_bdevs_discovered": 3, 00:14:25.069 "num_base_bdevs_operational": 3, 00:14:25.069 "base_bdevs_list": [ 00:14:25.069 { 00:14:25.069 "name": "spare", 00:14:25.069 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:25.069 "is_configured": true, 00:14:25.069 "data_offset": 2048, 00:14:25.069 "data_size": 63488 00:14:25.069 }, 00:14:25.069 { 00:14:25.069 "name": null, 00:14:25.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.069 "is_configured": false, 00:14:25.069 "data_offset": 0, 00:14:25.069 "data_size": 63488 00:14:25.069 }, 00:14:25.069 { 00:14:25.069 "name": "BaseBdev3", 00:14:25.069 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:25.069 "is_configured": true, 00:14:25.069 "data_offset": 2048, 00:14:25.069 "data_size": 63488 00:14:25.069 }, 00:14:25.069 { 00:14:25.069 "name": "BaseBdev4", 00:14:25.069 "uuid": 
"63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:25.069 "is_configured": true, 00:14:25.069 "data_offset": 2048, 00:14:25.069 "data_size": 63488 00:14:25.069 } 00:14:25.069 ] 00:14:25.069 }' 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.069 16:15:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.069 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.069 "name": "raid_bdev1", 00:14:25.069 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:25.069 "strip_size_kb": 0, 00:14:25.069 "state": "online", 00:14:25.069 "raid_level": "raid1", 00:14:25.069 "superblock": true, 00:14:25.069 "num_base_bdevs": 4, 00:14:25.069 "num_base_bdevs_discovered": 3, 00:14:25.069 "num_base_bdevs_operational": 3, 00:14:25.069 "base_bdevs_list": [ 00:14:25.069 { 00:14:25.069 "name": "spare", 00:14:25.069 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:25.069 "is_configured": true, 00:14:25.069 "data_offset": 2048, 00:14:25.069 "data_size": 63488 00:14:25.069 }, 00:14:25.069 { 00:14:25.069 "name": null, 00:14:25.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.070 "is_configured": false, 00:14:25.070 "data_offset": 0, 00:14:25.070 "data_size": 63488 00:14:25.070 }, 00:14:25.070 { 00:14:25.070 "name": "BaseBdev3", 00:14:25.070 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:25.070 "is_configured": true, 00:14:25.070 "data_offset": 2048, 00:14:25.070 "data_size": 63488 00:14:25.070 }, 00:14:25.070 { 00:14:25.070 "name": "BaseBdev4", 00:14:25.070 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:25.070 "is_configured": true, 00:14:25.070 "data_offset": 2048, 00:14:25.070 "data_size": 63488 00:14:25.070 } 00:14:25.070 ] 00:14:25.070 }' 00:14:25.070 16:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.070 16:15:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.638 [2024-09-28 16:15:40.166807] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.638 [2024-09-28 16:15:40.166893] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.638 [2024-09-28 16:15:40.167022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.638 [2024-09-28 16:15:40.167132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.638 [2024-09-28 16:15:40.167200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:25.638 
16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.638 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:25.898 /dev/nbd0 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:25.898 16:15:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.898 1+0 records in 00:14:25.898 1+0 records out 00:14:25.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047178 s, 8.7 MB/s 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.898 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:26.158 /dev/nbd1 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # 
(( i <= 20 )) 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.158 1+0 records in 00:14:26.158 1+0 records out 00:14:26.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305813 s, 13.4 MB/s 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.158 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:26.418 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:26.418 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.418 16:15:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.418 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.418 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:26.418 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.418 16:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.677 [2024-09-28 16:15:41.342624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.677 [2024-09-28 16:15:41.342687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.677 [2024-09-28 16:15:41.342716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:26.677 [2024-09-28 16:15:41.342727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.677 [2024-09-28 16:15:41.345259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.677 [2024-09-28 16:15:41.345360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:14:26.677 [2024-09-28 16:15:41.345468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:26.677 [2024-09-28 16:15:41.345533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.677 [2024-09-28 16:15:41.345688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.677 [2024-09-28 16:15:41.345810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:26.677 spare 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.677 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.937 [2024-09-28 16:15:41.445703] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:26.937 [2024-09-28 16:15:41.445728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.937 [2024-09-28 16:15:41.446012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:26.937 [2024-09-28 16:15:41.446173] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:26.937 [2024-09-28 16:15:41.446186] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:26.937 [2024-09-28 16:15:41.446383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.937 16:15:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.937 "name": "raid_bdev1", 00:14:26.937 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:26.937 "strip_size_kb": 0, 00:14:26.937 "state": "online", 00:14:26.937 "raid_level": "raid1", 00:14:26.937 "superblock": true, 00:14:26.937 "num_base_bdevs": 4, 00:14:26.937 "num_base_bdevs_discovered": 3, 00:14:26.937 "num_base_bdevs_operational": 3, 00:14:26.937 "base_bdevs_list": [ 00:14:26.937 { 
00:14:26.937 "name": "spare", 00:14:26.937 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:26.937 "is_configured": true, 00:14:26.937 "data_offset": 2048, 00:14:26.937 "data_size": 63488 00:14:26.937 }, 00:14:26.937 { 00:14:26.937 "name": null, 00:14:26.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.937 "is_configured": false, 00:14:26.937 "data_offset": 2048, 00:14:26.937 "data_size": 63488 00:14:26.937 }, 00:14:26.937 { 00:14:26.937 "name": "BaseBdev3", 00:14:26.937 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:26.937 "is_configured": true, 00:14:26.937 "data_offset": 2048, 00:14:26.937 "data_size": 63488 00:14:26.937 }, 00:14:26.937 { 00:14:26.937 "name": "BaseBdev4", 00:14:26.937 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:26.937 "is_configured": true, 00:14:26.937 "data_offset": 2048, 00:14:26.937 "data_size": 63488 00:14:26.937 } 00:14:26.937 ] 00:14:26.937 }' 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.937 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.505 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.505 "name": "raid_bdev1", 00:14:27.505 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:27.505 "strip_size_kb": 0, 00:14:27.505 "state": "online", 00:14:27.505 "raid_level": "raid1", 00:14:27.505 "superblock": true, 00:14:27.505 "num_base_bdevs": 4, 00:14:27.505 "num_base_bdevs_discovered": 3, 00:14:27.505 "num_base_bdevs_operational": 3, 00:14:27.505 "base_bdevs_list": [ 00:14:27.505 { 00:14:27.505 "name": "spare", 00:14:27.505 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:27.505 "is_configured": true, 00:14:27.505 "data_offset": 2048, 00:14:27.505 "data_size": 63488 00:14:27.505 }, 00:14:27.505 { 00:14:27.505 "name": null, 00:14:27.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.506 "is_configured": false, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": "BaseBdev3", 00:14:27.506 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:27.506 "is_configured": true, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": "BaseBdev4", 00:14:27.506 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:27.506 "is_configured": true, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 } 00:14:27.506 ] 00:14:27.506 }' 00:14:27.506 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.506 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.506 16:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.506 16:15:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.506 [2024-09-28 16:15:42.081395] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.506 16:15:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.506 "name": "raid_bdev1", 00:14:27.506 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:27.506 "strip_size_kb": 0, 00:14:27.506 "state": "online", 00:14:27.506 "raid_level": "raid1", 00:14:27.506 "superblock": true, 00:14:27.506 "num_base_bdevs": 4, 00:14:27.506 "num_base_bdevs_discovered": 2, 00:14:27.506 "num_base_bdevs_operational": 2, 00:14:27.506 "base_bdevs_list": [ 00:14:27.506 { 00:14:27.506 "name": null, 00:14:27.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.506 "is_configured": false, 00:14:27.506 "data_offset": 0, 00:14:27.506 "data_size": 63488 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": null, 00:14:27.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.506 "is_configured": false, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": "BaseBdev3", 00:14:27.506 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:27.506 
"is_configured": true, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 }, 00:14:27.506 { 00:14:27.506 "name": "BaseBdev4", 00:14:27.506 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:27.506 "is_configured": true, 00:14:27.506 "data_offset": 2048, 00:14:27.506 "data_size": 63488 00:14:27.506 } 00:14:27.506 ] 00:14:27.506 }' 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.506 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.077 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.077 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.077 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.077 [2024-09-28 16:15:42.532622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.077 [2024-09-28 16:15:42.532864] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:28.077 [2024-09-28 16:15:42.532921] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:28.077 [2024-09-28 16:15:42.532992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.077 [2024-09-28 16:15:42.545992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:28.077 16:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.077 16:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:28.077 [2024-09-28 16:15:42.548143] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.016 "name": "raid_bdev1", 00:14:29.016 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:29.016 "strip_size_kb": 0, 00:14:29.016 "state": "online", 00:14:29.016 "raid_level": "raid1", 
00:14:29.016 "superblock": true, 00:14:29.016 "num_base_bdevs": 4, 00:14:29.016 "num_base_bdevs_discovered": 3, 00:14:29.016 "num_base_bdevs_operational": 3, 00:14:29.016 "process": { 00:14:29.016 "type": "rebuild", 00:14:29.016 "target": "spare", 00:14:29.016 "progress": { 00:14:29.016 "blocks": 20480, 00:14:29.016 "percent": 32 00:14:29.016 } 00:14:29.016 }, 00:14:29.016 "base_bdevs_list": [ 00:14:29.016 { 00:14:29.016 "name": "spare", 00:14:29.016 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:29.016 "is_configured": true, 00:14:29.016 "data_offset": 2048, 00:14:29.016 "data_size": 63488 00:14:29.016 }, 00:14:29.016 { 00:14:29.016 "name": null, 00:14:29.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.016 "is_configured": false, 00:14:29.016 "data_offset": 2048, 00:14:29.016 "data_size": 63488 00:14:29.016 }, 00:14:29.016 { 00:14:29.016 "name": "BaseBdev3", 00:14:29.016 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:29.016 "is_configured": true, 00:14:29.016 "data_offset": 2048, 00:14:29.016 "data_size": 63488 00:14:29.016 }, 00:14:29.016 { 00:14:29.016 "name": "BaseBdev4", 00:14:29.016 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:29.016 "is_configured": true, 00:14:29.016 "data_offset": 2048, 00:14:29.016 "data_size": 63488 00:14:29.016 } 00:14:29.016 ] 00:14:29.016 }' 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.016 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:29.017 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.017 [2024-09-28 16:15:43.691914] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.277 [2024-09-28 16:15:43.756543] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.277 [2024-09-28 16:15:43.756645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.277 [2024-09-28 16:15:43.756700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.277 [2024-09-28 16:15:43.756721] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.277 "name": "raid_bdev1", 00:14:29.277 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:29.277 "strip_size_kb": 0, 00:14:29.277 "state": "online", 00:14:29.277 "raid_level": "raid1", 00:14:29.277 "superblock": true, 00:14:29.277 "num_base_bdevs": 4, 00:14:29.277 "num_base_bdevs_discovered": 2, 00:14:29.277 "num_base_bdevs_operational": 2, 00:14:29.277 "base_bdevs_list": [ 00:14:29.277 { 00:14:29.277 "name": null, 00:14:29.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.277 "is_configured": false, 00:14:29.277 "data_offset": 0, 00:14:29.277 "data_size": 63488 00:14:29.277 }, 00:14:29.277 { 00:14:29.277 "name": null, 00:14:29.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.277 "is_configured": false, 00:14:29.277 "data_offset": 2048, 00:14:29.277 "data_size": 63488 00:14:29.277 }, 00:14:29.277 { 00:14:29.277 "name": "BaseBdev3", 00:14:29.277 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:29.277 "is_configured": true, 00:14:29.277 "data_offset": 2048, 00:14:29.277 "data_size": 63488 00:14:29.277 }, 00:14:29.277 { 00:14:29.277 "name": "BaseBdev4", 00:14:29.277 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:29.277 "is_configured": true, 00:14:29.277 "data_offset": 2048, 00:14:29.277 "data_size": 63488 00:14:29.277 } 00:14:29.277 ] 00:14:29.277 }' 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:29.277 16:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.847 16:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.847 16:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.847 16:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.847 [2024-09-28 16:15:44.248275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.847 [2024-09-28 16:15:44.248336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.847 [2024-09-28 16:15:44.248370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:29.847 [2024-09-28 16:15:44.248381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.847 [2024-09-28 16:15:44.248901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.847 [2024-09-28 16:15:44.248918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.847 [2024-09-28 16:15:44.249001] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:29.847 [2024-09-28 16:15:44.249014] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:29.847 [2024-09-28 16:15:44.249029] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:29.847 [2024-09-28 16:15:44.249054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.847 [2024-09-28 16:15:44.261931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:29.847 spare 00:14:29.847 16:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.847 16:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:29.847 [2024-09-28 16:15:44.264028] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.788 "name": "raid_bdev1", 00:14:30.788 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:30.788 "strip_size_kb": 0, 00:14:30.788 "state": "online", 00:14:30.788 
"raid_level": "raid1", 00:14:30.788 "superblock": true, 00:14:30.788 "num_base_bdevs": 4, 00:14:30.788 "num_base_bdevs_discovered": 3, 00:14:30.788 "num_base_bdevs_operational": 3, 00:14:30.788 "process": { 00:14:30.788 "type": "rebuild", 00:14:30.788 "target": "spare", 00:14:30.788 "progress": { 00:14:30.788 "blocks": 20480, 00:14:30.788 "percent": 32 00:14:30.788 } 00:14:30.788 }, 00:14:30.788 "base_bdevs_list": [ 00:14:30.788 { 00:14:30.788 "name": "spare", 00:14:30.788 "uuid": "01798429-600a-53f2-a023-b7d613b1a546", 00:14:30.788 "is_configured": true, 00:14:30.788 "data_offset": 2048, 00:14:30.788 "data_size": 63488 00:14:30.788 }, 00:14:30.788 { 00:14:30.788 "name": null, 00:14:30.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.788 "is_configured": false, 00:14:30.788 "data_offset": 2048, 00:14:30.788 "data_size": 63488 00:14:30.788 }, 00:14:30.788 { 00:14:30.788 "name": "BaseBdev3", 00:14:30.788 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:30.788 "is_configured": true, 00:14:30.788 "data_offset": 2048, 00:14:30.788 "data_size": 63488 00:14:30.788 }, 00:14:30.788 { 00:14:30.788 "name": "BaseBdev4", 00:14:30.788 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:30.788 "is_configured": true, 00:14:30.788 "data_offset": 2048, 00:14:30.788 "data_size": 63488 00:14:30.788 } 00:14:30.788 ] 00:14:30.788 }' 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.788 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.788 [2024-09-28 16:15:45.431762] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.048 [2024-09-28 16:15:45.472433] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.048 [2024-09-28 16:15:45.472513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.048 [2024-09-28 16:15:45.472530] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.048 [2024-09-28 16:15:45.472540] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.048 
16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.048 "name": "raid_bdev1", 00:14:31.048 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:31.048 "strip_size_kb": 0, 00:14:31.048 "state": "online", 00:14:31.048 "raid_level": "raid1", 00:14:31.048 "superblock": true, 00:14:31.048 "num_base_bdevs": 4, 00:14:31.048 "num_base_bdevs_discovered": 2, 00:14:31.048 "num_base_bdevs_operational": 2, 00:14:31.048 "base_bdevs_list": [ 00:14:31.048 { 00:14:31.048 "name": null, 00:14:31.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.048 "is_configured": false, 00:14:31.048 "data_offset": 0, 00:14:31.048 "data_size": 63488 00:14:31.048 }, 00:14:31.048 { 00:14:31.048 "name": null, 00:14:31.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.048 "is_configured": false, 00:14:31.048 "data_offset": 2048, 00:14:31.048 "data_size": 63488 00:14:31.048 }, 00:14:31.048 { 00:14:31.048 "name": "BaseBdev3", 00:14:31.048 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:31.048 "is_configured": true, 00:14:31.048 "data_offset": 2048, 00:14:31.048 "data_size": 63488 00:14:31.048 }, 00:14:31.048 { 00:14:31.048 "name": "BaseBdev4", 00:14:31.048 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:31.048 "is_configured": true, 00:14:31.048 "data_offset": 2048, 00:14:31.048 "data_size": 63488 00:14:31.048 } 00:14:31.048 ] 00:14:31.048 }' 00:14:31.048 16:15:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.048 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.310 16:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.571 "name": "raid_bdev1", 00:14:31.571 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:31.571 "strip_size_kb": 0, 00:14:31.571 "state": "online", 00:14:31.571 "raid_level": "raid1", 00:14:31.571 "superblock": true, 00:14:31.571 "num_base_bdevs": 4, 00:14:31.571 "num_base_bdevs_discovered": 2, 00:14:31.571 "num_base_bdevs_operational": 2, 00:14:31.571 "base_bdevs_list": [ 00:14:31.571 { 00:14:31.571 "name": null, 00:14:31.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.571 "is_configured": false, 00:14:31.571 "data_offset": 0, 00:14:31.571 "data_size": 63488 00:14:31.571 }, 00:14:31.571 
{ 00:14:31.571 "name": null, 00:14:31.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.571 "is_configured": false, 00:14:31.571 "data_offset": 2048, 00:14:31.571 "data_size": 63488 00:14:31.571 }, 00:14:31.571 { 00:14:31.571 "name": "BaseBdev3", 00:14:31.571 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:31.571 "is_configured": true, 00:14:31.571 "data_offset": 2048, 00:14:31.571 "data_size": 63488 00:14:31.571 }, 00:14:31.571 { 00:14:31.571 "name": "BaseBdev4", 00:14:31.571 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:31.571 "is_configured": true, 00:14:31.571 "data_offset": 2048, 00:14:31.571 "data_size": 63488 00:14:31.571 } 00:14:31.571 ] 00:14:31.571 }' 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.571 [2024-09-28 16:15:46.113041] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:31.571 [2024-09-28 16:15:46.113156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.571 [2024-09-28 16:15:46.113184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:31.571 [2024-09-28 16:15:46.113196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.571 [2024-09-28 16:15:46.113730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.571 [2024-09-28 16:15:46.113758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:31.571 [2024-09-28 16:15:46.113838] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:31.571 [2024-09-28 16:15:46.113853] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:31.571 [2024-09-28 16:15:46.113862] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.571 [2024-09-28 16:15:46.113877] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:31.571 BaseBdev1 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.571 16:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.510 16:15:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.510 "name": "raid_bdev1", 00:14:32.510 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:32.510 "strip_size_kb": 0, 00:14:32.510 "state": "online", 00:14:32.510 "raid_level": "raid1", 00:14:32.510 "superblock": true, 00:14:32.510 "num_base_bdevs": 4, 00:14:32.510 "num_base_bdevs_discovered": 2, 00:14:32.510 "num_base_bdevs_operational": 2, 00:14:32.510 "base_bdevs_list": [ 00:14:32.510 { 00:14:32.510 "name": null, 00:14:32.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.510 "is_configured": false, 00:14:32.510 "data_offset": 0, 00:14:32.510 "data_size": 63488 00:14:32.510 }, 00:14:32.510 { 00:14:32.510 "name": null, 00:14:32.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.510 
"is_configured": false, 00:14:32.510 "data_offset": 2048, 00:14:32.510 "data_size": 63488 00:14:32.510 }, 00:14:32.510 { 00:14:32.510 "name": "BaseBdev3", 00:14:32.510 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:32.510 "is_configured": true, 00:14:32.510 "data_offset": 2048, 00:14:32.510 "data_size": 63488 00:14:32.510 }, 00:14:32.510 { 00:14:32.510 "name": "BaseBdev4", 00:14:32.510 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:32.510 "is_configured": true, 00:14:32.510 "data_offset": 2048, 00:14:32.510 "data_size": 63488 00:14:32.510 } 00:14:32.510 ] 00:14:32.510 }' 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.510 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:33.080 "name": "raid_bdev1", 00:14:33.080 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:33.080 "strip_size_kb": 0, 00:14:33.080 "state": "online", 00:14:33.080 "raid_level": "raid1", 00:14:33.080 "superblock": true, 00:14:33.080 "num_base_bdevs": 4, 00:14:33.080 "num_base_bdevs_discovered": 2, 00:14:33.080 "num_base_bdevs_operational": 2, 00:14:33.080 "base_bdevs_list": [ 00:14:33.080 { 00:14:33.080 "name": null, 00:14:33.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.080 "is_configured": false, 00:14:33.080 "data_offset": 0, 00:14:33.080 "data_size": 63488 00:14:33.080 }, 00:14:33.080 { 00:14:33.080 "name": null, 00:14:33.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.080 "is_configured": false, 00:14:33.080 "data_offset": 2048, 00:14:33.080 "data_size": 63488 00:14:33.080 }, 00:14:33.080 { 00:14:33.080 "name": "BaseBdev3", 00:14:33.080 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:33.080 "is_configured": true, 00:14:33.080 "data_offset": 2048, 00:14:33.080 "data_size": 63488 00:14:33.080 }, 00:14:33.080 { 00:14:33.080 "name": "BaseBdev4", 00:14:33.080 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:33.080 "is_configured": true, 00:14:33.080 "data_offset": 2048, 00:14:33.080 "data_size": 63488 00:14:33.080 } 00:14:33.080 ] 00:14:33.080 }' 00:14:33.080 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 [2024-09-28 16:15:47.718308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.081 [2024-09-28 16:15:47.718530] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:33.081 [2024-09-28 16:15:47.718549] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:33.081 request: 00:14:33.081 { 00:14:33.081 "base_bdev": "BaseBdev1", 00:14:33.081 "raid_bdev": "raid_bdev1", 00:14:33.081 "method": "bdev_raid_add_base_bdev", 00:14:33.081 "req_id": 1 00:14:33.081 } 00:14:33.081 Got JSON-RPC error response 00:14:33.081 response: 00:14:33.081 { 00:14:33.081 "code": -22, 00:14:33.081 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:33.081 } 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:33.081 16:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.463 "name": "raid_bdev1", 00:14:34.463 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:34.463 "strip_size_kb": 0, 00:14:34.463 "state": "online", 00:14:34.463 "raid_level": "raid1", 00:14:34.463 "superblock": true, 00:14:34.463 "num_base_bdevs": 4, 00:14:34.463 "num_base_bdevs_discovered": 2, 00:14:34.463 "num_base_bdevs_operational": 2, 00:14:34.463 "base_bdevs_list": [ 00:14:34.463 { 00:14:34.463 "name": null, 00:14:34.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.463 "is_configured": false, 00:14:34.463 "data_offset": 0, 00:14:34.463 "data_size": 63488 00:14:34.463 }, 00:14:34.463 { 00:14:34.463 "name": null, 00:14:34.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.463 "is_configured": false, 00:14:34.463 "data_offset": 2048, 00:14:34.463 "data_size": 63488 00:14:34.463 }, 00:14:34.463 { 00:14:34.463 "name": "BaseBdev3", 00:14:34.463 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:34.463 "is_configured": true, 00:14:34.463 "data_offset": 2048, 00:14:34.463 "data_size": 63488 00:14:34.463 }, 00:14:34.463 { 00:14:34.463 "name": "BaseBdev4", 00:14:34.463 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:34.463 "is_configured": true, 00:14:34.463 "data_offset": 2048, 00:14:34.463 "data_size": 63488 00:14:34.463 } 00:14:34.463 ] 00:14:34.463 }' 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.463 16:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.727 16:15:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.727 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.727 "name": "raid_bdev1", 00:14:34.727 "uuid": "9e6ff0c0-034f-4ead-9b69-5af889334beb", 00:14:34.727 "strip_size_kb": 0, 00:14:34.727 "state": "online", 00:14:34.727 "raid_level": "raid1", 00:14:34.727 "superblock": true, 00:14:34.727 "num_base_bdevs": 4, 00:14:34.727 "num_base_bdevs_discovered": 2, 00:14:34.727 "num_base_bdevs_operational": 2, 00:14:34.727 "base_bdevs_list": [ 00:14:34.727 { 00:14:34.727 "name": null, 00:14:34.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.728 "is_configured": false, 00:14:34.728 "data_offset": 0, 00:14:34.728 "data_size": 63488 00:14:34.728 }, 00:14:34.728 { 00:14:34.728 "name": null, 00:14:34.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.728 "is_configured": false, 00:14:34.728 "data_offset": 2048, 00:14:34.728 "data_size": 63488 00:14:34.728 }, 00:14:34.728 { 00:14:34.728 "name": "BaseBdev3", 00:14:34.728 "uuid": "6cacc0d0-72fe-5f9a-bf0e-a3daf4655419", 00:14:34.728 "is_configured": true, 00:14:34.728 "data_offset": 2048, 00:14:34.728 "data_size": 63488 00:14:34.728 }, 
00:14:34.728 { 00:14:34.728 "name": "BaseBdev4", 00:14:34.728 "uuid": "63ce9acc-c75f-5b68-8c4e-e29ab06801cb", 00:14:34.728 "is_configured": true, 00:14:34.728 "data_offset": 2048, 00:14:34.728 "data_size": 63488 00:14:34.728 } 00:14:34.728 ] 00:14:34.728 }' 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78003 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78003 ']' 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78003 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78003 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:34.728 killing process with pid 78003 00:14:34.728 Received shutdown signal, test time was about 60.000000 seconds 00:14:34.728 00:14:34.728 Latency(us) 00:14:34.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.728 =================================================================================================================== 00:14:34.728 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78003' 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78003 00:14:34.728 [2024-09-28 16:15:49.339555] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.728 [2024-09-28 16:15:49.339680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.728 16:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78003 00:14:34.728 [2024-09-28 16:15:49.339751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.728 [2024-09-28 16:15:49.339763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:35.300 [2024-09-28 16:15:49.842540] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:36.827 00:14:36.827 real 0m25.097s 00:14:36.827 user 0m30.183s 00:14:36.827 sys 0m3.979s 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.827 ************************************ 00:14:36.827 END TEST raid_rebuild_test_sb 00:14:36.827 ************************************ 00:14:36.827 16:15:51 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:36.827 16:15:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:36.827 16:15:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:36.827 16:15:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.827 ************************************ 00:14:36.827 START TEST raid_rebuild_test_io 
00:14:36.827 ************************************ 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.827 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78763 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78763 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78763 ']' 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.828 16:15:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.828 [2024-09-28 16:15:51.352994] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:36.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:36.828 Zero copy mechanism will not be used. 00:14:36.828 [2024-09-28 16:15:51.353226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78763 ] 00:14:37.088 [2024-09-28 16:15:51.522450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.088 [2024-09-28 16:15:51.767380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.347 [2024-09-28 16:15:51.995117] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.347 [2024-09-28 16:15:51.995253] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.608 BaseBdev1_malloc 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.608 [2024-09-28 16:15:52.223862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:37.608 [2024-09-28 16:15:52.224014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.608 [2024-09-28 16:15:52.224059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:37.608 [2024-09-28 16:15:52.224095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.608 [2024-09-28 16:15:52.226518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.608 [2024-09-28 16:15:52.226595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:37.608 BaseBdev1 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.608 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 BaseBdev2_malloc 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 [2024-09-28 16:15:52.304938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:37.869 [2024-09-28 16:15:52.305061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.869 [2024-09-28 16:15:52.305099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:37.869 [2024-09-28 16:15:52.305137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.869 [2024-09-28 16:15:52.307494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.869 [2024-09-28 16:15:52.307586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:37.869 BaseBdev2 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 BaseBdev3_malloc 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 [2024-09-28 16:15:52.365102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:37.869 [2024-09-28 16:15:52.365238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.869 [2024-09-28 16:15:52.365279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:37.869 [2024-09-28 16:15:52.365319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.869 [2024-09-28 16:15:52.367653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.869 [2024-09-28 16:15:52.367745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:37.869 BaseBdev3 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 BaseBdev4_malloc 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:37.869 [2024-09-28 16:15:52.425601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:37.869 [2024-09-28 16:15:52.425656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.869 [2024-09-28 16:15:52.425676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:37.869 [2024-09-28 16:15:52.425687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.869 [2024-09-28 16:15:52.427999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.869 [2024-09-28 16:15:52.428040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:37.869 BaseBdev4 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 spare_malloc 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 spare_delay 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.869 16:15:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 [2024-09-28 16:15:52.498823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.869 [2024-09-28 16:15:52.498954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.869 [2024-09-28 16:15:52.498977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:37.869 [2024-09-28 16:15:52.498988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.869 [2024-09-28 16:15:52.501360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.869 [2024-09-28 16:15:52.501397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.869 spare 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.869 [2024-09-28 16:15:52.510868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.869 [2024-09-28 16:15:52.512888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.869 [2024-09-28 16:15:52.512967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.869 [2024-09-28 16:15:52.513018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.869 [2024-09-28 16:15:52.513091] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.869 [2024-09-28 16:15:52.513102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:37.869 [2024-09-28 16:15:52.513358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.869 [2024-09-28 16:15:52.513537] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.869 [2024-09-28 16:15:52.513548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.869 [2024-09-28 16:15:52.513725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.869 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.870 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.130 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.130 "name": "raid_bdev1", 00:14:38.130 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:38.130 "strip_size_kb": 0, 00:14:38.130 "state": "online", 00:14:38.130 "raid_level": "raid1", 00:14:38.130 "superblock": false, 00:14:38.130 "num_base_bdevs": 4, 00:14:38.130 "num_base_bdevs_discovered": 4, 00:14:38.130 "num_base_bdevs_operational": 4, 00:14:38.130 "base_bdevs_list": [ 00:14:38.130 { 00:14:38.130 "name": "BaseBdev1", 00:14:38.130 "uuid": "53bb802c-01b1-5126-8612-8744ddf881a9", 00:14:38.130 "is_configured": true, 00:14:38.130 "data_offset": 0, 00:14:38.130 "data_size": 65536 00:14:38.130 }, 00:14:38.130 { 00:14:38.130 "name": "BaseBdev2", 00:14:38.130 "uuid": "e55a346e-27ef-5e07-8656-d7764d56a241", 00:14:38.130 "is_configured": true, 00:14:38.130 "data_offset": 0, 00:14:38.130 "data_size": 65536 00:14:38.130 }, 00:14:38.130 { 00:14:38.130 "name": "BaseBdev3", 00:14:38.130 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:38.130 "is_configured": true, 00:14:38.130 "data_offset": 0, 00:14:38.130 "data_size": 65536 00:14:38.130 }, 00:14:38.130 { 00:14:38.130 "name": "BaseBdev4", 00:14:38.130 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:38.130 "is_configured": true, 00:14:38.130 "data_offset": 0, 00:14:38.130 "data_size": 65536 00:14:38.130 } 00:14:38.130 ] 00:14:38.130 }' 00:14:38.130 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:38.130 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.390 [2024-09-28 16:15:52.958379] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.390 16:15:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.390 [2024-09-28 16:15:53.033919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.390 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.650 16:15:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.650 "name": "raid_bdev1", 00:14:38.650 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:38.650 "strip_size_kb": 0, 00:14:38.650 "state": "online", 00:14:38.650 "raid_level": "raid1", 00:14:38.650 "superblock": false, 00:14:38.650 "num_base_bdevs": 4, 00:14:38.650 "num_base_bdevs_discovered": 3, 00:14:38.650 "num_base_bdevs_operational": 3, 00:14:38.650 "base_bdevs_list": [ 00:14:38.650 { 00:14:38.650 "name": null, 00:14:38.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.650 "is_configured": false, 00:14:38.650 "data_offset": 0, 00:14:38.650 "data_size": 65536 00:14:38.650 }, 00:14:38.650 { 00:14:38.650 "name": "BaseBdev2", 00:14:38.650 "uuid": "e55a346e-27ef-5e07-8656-d7764d56a241", 00:14:38.650 "is_configured": true, 00:14:38.650 "data_offset": 0, 00:14:38.650 "data_size": 65536 00:14:38.650 }, 00:14:38.650 { 00:14:38.650 "name": "BaseBdev3", 00:14:38.650 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:38.650 "is_configured": true, 00:14:38.650 "data_offset": 0, 00:14:38.650 "data_size": 65536 00:14:38.650 }, 00:14:38.650 { 00:14:38.650 "name": "BaseBdev4", 00:14:38.650 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:38.650 "is_configured": true, 00:14:38.650 "data_offset": 0, 00:14:38.650 "data_size": 65536 00:14:38.650 } 00:14:38.650 ] 00:14:38.650 }' 00:14:38.650 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.650 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.650 [2024-09-28 16:15:53.131303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:38.650 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:38.650 Zero copy mechanism will not be used. 00:14:38.650 Running I/O for 60 seconds... 
00:14:38.910 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.910 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.910 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.910 [2024-09-28 16:15:53.520705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.910 16:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.910 16:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:38.910 [2024-09-28 16:15:53.571255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:38.910 [2024-09-28 16:15:53.573566] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.169 [2024-09-28 16:15:53.698414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:39.170 [2024-09-28 16:15:53.699315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:39.430 [2024-09-28 16:15:53.927914] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:39.430 [2024-09-28 16:15:53.929131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:39.950 144.00 IOPS, 432.00 MiB/s 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.950 "name": "raid_bdev1", 00:14:39.950 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:39.950 "strip_size_kb": 0, 00:14:39.950 "state": "online", 00:14:39.950 "raid_level": "raid1", 00:14:39.950 "superblock": false, 00:14:39.950 "num_base_bdevs": 4, 00:14:39.950 "num_base_bdevs_discovered": 4, 00:14:39.950 "num_base_bdevs_operational": 4, 00:14:39.950 "process": { 00:14:39.950 "type": "rebuild", 00:14:39.950 "target": "spare", 00:14:39.950 "progress": { 00:14:39.950 "blocks": 12288, 00:14:39.950 "percent": 18 00:14:39.950 } 00:14:39.950 }, 00:14:39.950 "base_bdevs_list": [ 00:14:39.950 { 00:14:39.950 "name": "spare", 00:14:39.950 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:39.950 "is_configured": true, 00:14:39.950 "data_offset": 0, 00:14:39.950 "data_size": 65536 00:14:39.950 }, 00:14:39.950 { 00:14:39.950 "name": "BaseBdev2", 00:14:39.950 "uuid": "e55a346e-27ef-5e07-8656-d7764d56a241", 00:14:39.950 "is_configured": true, 00:14:39.950 "data_offset": 0, 00:14:39.950 "data_size": 65536 00:14:39.950 }, 00:14:39.950 { 00:14:39.950 "name": "BaseBdev3", 00:14:39.950 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:39.950 "is_configured": true, 00:14:39.950 
"data_offset": 0, 00:14:39.950 "data_size": 65536 00:14:39.950 }, 00:14:39.950 { 00:14:39.950 "name": "BaseBdev4", 00:14:39.950 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:39.950 "is_configured": true, 00:14:39.950 "data_offset": 0, 00:14:39.950 "data_size": 65536 00:14:39.950 } 00:14:39.950 ] 00:14:39.950 }' 00:14:39.950 [2024-09-28 16:15:54.611969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:39.950 [2024-09-28 16:15:54.612519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:39.950 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.211 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.211 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.211 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.211 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.211 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.211 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 [2024-09-28 16:15:54.701424] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.211 [2024-09-28 16:15:54.725327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:40.211 [2024-09-28 16:15:54.830125] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.211 [2024-09-28 16:15:54.842734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.211 [2024-09-28 
16:15:54.842781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.211 [2024-09-28 16:15:54.842795] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.211 [2024-09-28 16:15:54.879345] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.470 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.470 "name": "raid_bdev1", 00:14:40.470 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:40.470 "strip_size_kb": 0, 00:14:40.470 "state": "online", 00:14:40.470 "raid_level": "raid1", 00:14:40.470 "superblock": false, 00:14:40.470 "num_base_bdevs": 4, 00:14:40.470 "num_base_bdevs_discovered": 3, 00:14:40.470 "num_base_bdevs_operational": 3, 00:14:40.470 "base_bdevs_list": [ 00:14:40.470 { 00:14:40.470 "name": null, 00:14:40.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.470 "is_configured": false, 00:14:40.470 "data_offset": 0, 00:14:40.470 "data_size": 65536 00:14:40.470 }, 00:14:40.470 { 00:14:40.470 "name": "BaseBdev2", 00:14:40.470 "uuid": "e55a346e-27ef-5e07-8656-d7764d56a241", 00:14:40.470 "is_configured": true, 00:14:40.470 "data_offset": 0, 00:14:40.471 "data_size": 65536 00:14:40.471 }, 00:14:40.471 { 00:14:40.471 "name": "BaseBdev3", 00:14:40.471 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:40.471 "is_configured": true, 00:14:40.471 "data_offset": 0, 00:14:40.471 "data_size": 65536 00:14:40.471 }, 00:14:40.471 { 00:14:40.471 "name": "BaseBdev4", 00:14:40.471 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:40.471 "is_configured": true, 00:14:40.471 "data_offset": 0, 00:14:40.471 "data_size": 65536 00:14:40.471 } 00:14:40.471 ] 00:14:40.471 }' 00:14:40.471 16:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.471 16:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.730 121.50 IOPS, 364.50 MiB/s 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.730 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:40.730 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.730 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.730 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.730 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.730 16:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.730 16:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.731 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.731 16:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.731 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.731 "name": "raid_bdev1", 00:14:40.731 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:40.731 "strip_size_kb": 0, 00:14:40.731 "state": "online", 00:14:40.731 "raid_level": "raid1", 00:14:40.731 "superblock": false, 00:14:40.731 "num_base_bdevs": 4, 00:14:40.731 "num_base_bdevs_discovered": 3, 00:14:40.731 "num_base_bdevs_operational": 3, 00:14:40.731 "base_bdevs_list": [ 00:14:40.731 { 00:14:40.731 "name": null, 00:14:40.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.731 "is_configured": false, 00:14:40.731 "data_offset": 0, 00:14:40.731 "data_size": 65536 00:14:40.731 }, 00:14:40.731 { 00:14:40.731 "name": "BaseBdev2", 00:14:40.731 "uuid": "e55a346e-27ef-5e07-8656-d7764d56a241", 00:14:40.731 "is_configured": true, 00:14:40.731 "data_offset": 0, 00:14:40.731 "data_size": 65536 00:14:40.731 }, 00:14:40.731 { 00:14:40.731 "name": "BaseBdev3", 00:14:40.731 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:40.731 "is_configured": true, 00:14:40.731 "data_offset": 0, 
00:14:40.731 "data_size": 65536 00:14:40.731 }, 00:14:40.731 { 00:14:40.731 "name": "BaseBdev4", 00:14:40.731 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:40.731 "is_configured": true, 00:14:40.731 "data_offset": 0, 00:14:40.731 "data_size": 65536 00:14:40.731 } 00:14:40.731 ] 00:14:40.731 }' 00:14:40.731 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.731 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.731 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.991 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.991 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.991 16:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.991 16:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.991 [2024-09-28 16:15:55.462541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.991 16:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.991 16:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:40.991 [2024-09-28 16:15:55.520367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:40.991 [2024-09-28 16:15:55.522653] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.991 [2024-09-28 16:15:55.633264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.991 [2024-09-28 16:15:55.635504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:41.251 
[2024-09-28 16:15:55.851837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:41.251 [2024-09-28 16:15:55.852131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:41.770 123.00 IOPS, 369.00 MiB/s [2024-09-28 16:15:56.199960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:41.770 [2024-09-28 16:15:56.202072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:41.770 [2024-09-28 16:15:56.417736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:41.770 [2024-09-28 16:15:56.418714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.031 "name": "raid_bdev1", 00:14:42.031 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:42.031 "strip_size_kb": 0, 00:14:42.031 "state": "online", 00:14:42.031 "raid_level": "raid1", 00:14:42.031 "superblock": false, 00:14:42.031 "num_base_bdevs": 4, 00:14:42.031 "num_base_bdevs_discovered": 4, 00:14:42.031 "num_base_bdevs_operational": 4, 00:14:42.031 "process": { 00:14:42.031 "type": "rebuild", 00:14:42.031 "target": "spare", 00:14:42.031 "progress": { 00:14:42.031 "blocks": 10240, 00:14:42.031 "percent": 15 00:14:42.031 } 00:14:42.031 }, 00:14:42.031 "base_bdevs_list": [ 00:14:42.031 { 00:14:42.031 "name": "spare", 00:14:42.031 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:42.031 "is_configured": true, 00:14:42.031 "data_offset": 0, 00:14:42.031 "data_size": 65536 00:14:42.031 }, 00:14:42.031 { 00:14:42.031 "name": "BaseBdev2", 00:14:42.031 "uuid": "e55a346e-27ef-5e07-8656-d7764d56a241", 00:14:42.031 "is_configured": true, 00:14:42.031 "data_offset": 0, 00:14:42.031 "data_size": 65536 00:14:42.031 }, 00:14:42.031 { 00:14:42.031 "name": "BaseBdev3", 00:14:42.031 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:42.031 "is_configured": true, 00:14:42.031 "data_offset": 0, 00:14:42.031 "data_size": 65536 00:14:42.031 }, 00:14:42.031 { 00:14:42.031 "name": "BaseBdev4", 00:14:42.031 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:42.031 "is_configured": true, 00:14:42.031 "data_offset": 0, 00:14:42.031 "data_size": 65536 00:14:42.031 } 00:14:42.031 ] 00:14:42.031 }' 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.031 16:15:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.031 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.031 [2024-09-28 16:15:56.668000] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.291 [2024-09-28 16:15:56.754063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:42.291 [2024-09-28 16:15:56.754627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:42.291 [2024-09-28 16:15:56.858046] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:42.291 [2024-09-28 16:15:56.858124] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 
00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.291 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.291 "name": "raid_bdev1", 00:14:42.291 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:42.291 "strip_size_kb": 0, 00:14:42.291 "state": "online", 00:14:42.291 "raid_level": "raid1", 00:14:42.291 "superblock": false, 00:14:42.291 "num_base_bdevs": 4, 00:14:42.291 "num_base_bdevs_discovered": 3, 00:14:42.291 "num_base_bdevs_operational": 3, 00:14:42.291 "process": { 00:14:42.291 "type": "rebuild", 00:14:42.291 "target": "spare", 00:14:42.291 "progress": { 00:14:42.291 "blocks": 14336, 00:14:42.291 "percent": 21 00:14:42.291 } 00:14:42.291 }, 00:14:42.291 "base_bdevs_list": [ 00:14:42.291 { 00:14:42.291 "name": "spare", 00:14:42.291 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:42.291 "is_configured": true, 00:14:42.291 "data_offset": 0, 00:14:42.292 
"data_size": 65536 00:14:42.292 }, 00:14:42.292 { 00:14:42.292 "name": null, 00:14:42.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.292 "is_configured": false, 00:14:42.292 "data_offset": 0, 00:14:42.292 "data_size": 65536 00:14:42.292 }, 00:14:42.292 { 00:14:42.292 "name": "BaseBdev3", 00:14:42.292 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:42.292 "is_configured": true, 00:14:42.292 "data_offset": 0, 00:14:42.292 "data_size": 65536 00:14:42.292 }, 00:14:42.292 { 00:14:42.292 "name": "BaseBdev4", 00:14:42.292 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:42.292 "is_configured": true, 00:14:42.292 "data_offset": 0, 00:14:42.292 "data_size": 65536 00:14:42.292 } 00:14:42.292 ] 00:14:42.292 }' 00:14:42.292 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.292 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.292 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.552 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.552 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=489 00:14:42.552 16:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.552 16:15:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.552 "name": "raid_bdev1", 00:14:42.552 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:42.552 "strip_size_kb": 0, 00:14:42.552 "state": "online", 00:14:42.552 "raid_level": "raid1", 00:14:42.552 "superblock": false, 00:14:42.552 "num_base_bdevs": 4, 00:14:42.552 "num_base_bdevs_discovered": 3, 00:14:42.552 "num_base_bdevs_operational": 3, 00:14:42.552 "process": { 00:14:42.552 "type": "rebuild", 00:14:42.552 "target": "spare", 00:14:42.552 "progress": { 00:14:42.552 "blocks": 16384, 00:14:42.552 "percent": 25 00:14:42.552 } 00:14:42.552 }, 00:14:42.552 "base_bdevs_list": [ 00:14:42.552 { 00:14:42.552 "name": "spare", 00:14:42.552 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:42.552 "is_configured": true, 00:14:42.552 "data_offset": 0, 00:14:42.552 "data_size": 65536 00:14:42.552 }, 00:14:42.552 { 00:14:42.552 "name": null, 00:14:42.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.552 "is_configured": false, 00:14:42.552 "data_offset": 0, 00:14:42.552 "data_size": 65536 00:14:42.552 }, 00:14:42.552 { 00:14:42.552 "name": "BaseBdev3", 00:14:42.552 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:42.552 "is_configured": true, 00:14:42.552 "data_offset": 0, 00:14:42.552 "data_size": 65536 00:14:42.552 }, 00:14:42.552 { 00:14:42.552 "name": "BaseBdev4", 00:14:42.552 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 
00:14:42.552 "is_configured": true, 00:14:42.552 "data_offset": 0, 00:14:42.552 "data_size": 65536 00:14:42.552 } 00:14:42.552 ] 00:14:42.552 }' 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.552 108.50 IOPS, 325.50 MiB/s 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.552 16:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.552 [2024-09-28 16:15:57.208384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:42.552 [2024-09-28 16:15:57.208920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:42.813 [2024-09-28 16:15:57.413765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:42.813 [2024-09-28 16:15:57.414236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:43.072 [2024-09-28 16:15:57.742099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:43.332 [2024-09-28 16:15:57.859062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:43.591 97.00 IOPS, 291.00 MiB/s 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.591 16:15:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.591 "name": "raid_bdev1", 00:14:43.591 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:43.591 "strip_size_kb": 0, 00:14:43.591 "state": "online", 00:14:43.591 "raid_level": "raid1", 00:14:43.591 "superblock": false, 00:14:43.591 "num_base_bdevs": 4, 00:14:43.591 "num_base_bdevs_discovered": 3, 00:14:43.591 "num_base_bdevs_operational": 3, 00:14:43.591 "process": { 00:14:43.591 "type": "rebuild", 00:14:43.591 "target": "spare", 00:14:43.591 "progress": { 00:14:43.591 "blocks": 30720, 00:14:43.591 "percent": 46 00:14:43.591 } 00:14:43.591 }, 00:14:43.591 "base_bdevs_list": [ 00:14:43.591 { 00:14:43.591 "name": "spare", 00:14:43.591 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:43.591 "is_configured": true, 00:14:43.591 "data_offset": 0, 00:14:43.591 "data_size": 65536 00:14:43.591 }, 00:14:43.591 { 00:14:43.591 "name": null, 00:14:43.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.591 
"is_configured": false, 00:14:43.591 "data_offset": 0, 00:14:43.591 "data_size": 65536 00:14:43.591 }, 00:14:43.591 { 00:14:43.591 "name": "BaseBdev3", 00:14:43.591 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:43.591 "is_configured": true, 00:14:43.591 "data_offset": 0, 00:14:43.591 "data_size": 65536 00:14:43.591 }, 00:14:43.591 { 00:14:43.591 "name": "BaseBdev4", 00:14:43.591 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:43.591 "is_configured": true, 00:14:43.591 "data_offset": 0, 00:14:43.591 "data_size": 65536 00:14:43.591 } 00:14:43.591 ] 00:14:43.591 }' 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.591 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.851 [2024-09-28 16:15:58.294564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:43.851 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.851 16:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.111 [2024-09-28 16:15:58.639389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:44.111 [2024-09-28 16:15:58.640847] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:44.681 [2024-09-28 16:15:59.096180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:44.681 85.00 IOPS, 255.00 MiB/s 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.681 [2024-09-28 16:15:59.320692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.681 16:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.940 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.940 "name": "raid_bdev1", 00:14:44.940 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:44.940 "strip_size_kb": 0, 00:14:44.940 "state": "online", 00:14:44.940 "raid_level": "raid1", 00:14:44.940 "superblock": false, 00:14:44.940 "num_base_bdevs": 4, 00:14:44.940 "num_base_bdevs_discovered": 3, 00:14:44.940 "num_base_bdevs_operational": 3, 00:14:44.940 "process": { 00:14:44.940 "type": "rebuild", 00:14:44.940 "target": "spare", 00:14:44.940 "progress": { 00:14:44.940 "blocks": 47104, 00:14:44.940 "percent": 71 00:14:44.940 } 00:14:44.940 }, 00:14:44.940 "base_bdevs_list": [ 00:14:44.940 { 00:14:44.940 "name": "spare", 00:14:44.940 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 
00:14:44.940 "is_configured": true, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 }, 00:14:44.940 { 00:14:44.940 "name": null, 00:14:44.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.940 "is_configured": false, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 }, 00:14:44.940 { 00:14:44.940 "name": "BaseBdev3", 00:14:44.940 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:44.940 "is_configured": true, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 }, 00:14:44.940 { 00:14:44.940 "name": "BaseBdev4", 00:14:44.940 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:44.940 "is_configured": true, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 } 00:14:44.940 ] 00:14:44.940 }' 00:14:44.940 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.940 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.940 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.940 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.940 16:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.769 78.00 IOPS, 234.00 MiB/s [2024-09-28 16:16:00.308325] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:45.769 [2024-09-28 16:16:00.408120] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:45.769 [2024-09-28 16:16:00.417452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.769 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.769 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:14:45.769 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.769 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.769 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.769 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.028 "name": "raid_bdev1", 00:14:46.028 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:46.028 "strip_size_kb": 0, 00:14:46.028 "state": "online", 00:14:46.028 "raid_level": "raid1", 00:14:46.028 "superblock": false, 00:14:46.028 "num_base_bdevs": 4, 00:14:46.028 "num_base_bdevs_discovered": 3, 00:14:46.028 "num_base_bdevs_operational": 3, 00:14:46.028 "base_bdevs_list": [ 00:14:46.028 { 00:14:46.028 "name": "spare", 00:14:46.028 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:46.028 "is_configured": true, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 }, 00:14:46.028 { 00:14:46.028 "name": null, 00:14:46.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.028 "is_configured": false, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 }, 00:14:46.028 { 00:14:46.028 "name": "BaseBdev3", 00:14:46.028 "uuid": 
"005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:46.028 "is_configured": true, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 }, 00:14:46.028 { 00:14:46.028 "name": "BaseBdev4", 00:14:46.028 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:46.028 "is_configured": true, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 } 00:14:46.028 ] 00:14:46.028 }' 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.028 16:16:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.028 "name": "raid_bdev1", 00:14:46.028 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:46.028 "strip_size_kb": 0, 00:14:46.028 "state": "online", 00:14:46.028 "raid_level": "raid1", 00:14:46.028 "superblock": false, 00:14:46.028 "num_base_bdevs": 4, 00:14:46.028 "num_base_bdevs_discovered": 3, 00:14:46.028 "num_base_bdevs_operational": 3, 00:14:46.028 "base_bdevs_list": [ 00:14:46.028 { 00:14:46.028 "name": "spare", 00:14:46.028 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:46.028 "is_configured": true, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 }, 00:14:46.028 { 00:14:46.028 "name": null, 00:14:46.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.028 "is_configured": false, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 }, 00:14:46.028 { 00:14:46.028 "name": "BaseBdev3", 00:14:46.028 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:46.028 "is_configured": true, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 }, 00:14:46.028 { 00:14:46.028 "name": "BaseBdev4", 00:14:46.028 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:46.028 "is_configured": true, 00:14:46.028 "data_offset": 0, 00:14:46.028 "data_size": 65536 00:14:46.028 } 00:14:46.028 ] 00:14:46.028 }' 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.028 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.287 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.288 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.288 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.288 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.288 "name": "raid_bdev1", 00:14:46.288 "uuid": "d1f78400-bb5d-4cd6-bdbd-cbec3a433bb2", 00:14:46.288 "strip_size_kb": 0, 00:14:46.288 "state": "online", 00:14:46.288 "raid_level": "raid1", 00:14:46.288 "superblock": false, 00:14:46.288 "num_base_bdevs": 4, 00:14:46.288 
"num_base_bdevs_discovered": 3, 00:14:46.288 "num_base_bdevs_operational": 3, 00:14:46.288 "base_bdevs_list": [ 00:14:46.288 { 00:14:46.288 "name": "spare", 00:14:46.288 "uuid": "f9a74f60-d19c-503c-a2cc-fafa2c16e92e", 00:14:46.288 "is_configured": true, 00:14:46.288 "data_offset": 0, 00:14:46.288 "data_size": 65536 00:14:46.288 }, 00:14:46.288 { 00:14:46.288 "name": null, 00:14:46.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.288 "is_configured": false, 00:14:46.288 "data_offset": 0, 00:14:46.288 "data_size": 65536 00:14:46.288 }, 00:14:46.288 { 00:14:46.288 "name": "BaseBdev3", 00:14:46.288 "uuid": "005dfdc8-a8ea-5508-92d2-416a839dd5fa", 00:14:46.288 "is_configured": true, 00:14:46.288 "data_offset": 0, 00:14:46.288 "data_size": 65536 00:14:46.288 }, 00:14:46.288 { 00:14:46.288 "name": "BaseBdev4", 00:14:46.288 "uuid": "87c3afc6-175a-54a5-a2d7-be113e31b079", 00:14:46.288 "is_configured": true, 00:14:46.288 "data_offset": 0, 00:14:46.288 "data_size": 65536 00:14:46.288 } 00:14:46.288 ] 00:14:46.288 }' 00:14:46.288 16:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.288 16:16:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 73.38 IOPS, 220.12 MiB/s 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.547 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.547 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 [2024-09-28 16:16:01.177607] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.547 [2024-09-28 16:16:01.177705] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.807 00:14:46.807 Latency(us) 00:14:46.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.807 Job: raid_bdev1 (Core Mask 0x1, 
workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:46.807 raid_bdev1 : 8.16 72.31 216.93 0.00 0.00 19085.10 291.55 119052.30 00:14:46.807 =================================================================================================================== 00:14:46.807 Total : 72.31 216.93 0.00 0.00 19085.10 291.55 119052.30 00:14:46.807 [2024-09-28 16:16:01.297131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.807 { 00:14:46.807 "results": [ 00:14:46.807 { 00:14:46.807 "job": "raid_bdev1", 00:14:46.807 "core_mask": "0x1", 00:14:46.807 "workload": "randrw", 00:14:46.807 "percentage": 50, 00:14:46.807 "status": "finished", 00:14:46.807 "queue_depth": 2, 00:14:46.807 "io_size": 3145728, 00:14:46.807 "runtime": 8.159488, 00:14:46.807 "iops": 72.3084585699495, 00:14:46.807 "mibps": 216.92537570984848, 00:14:46.807 "io_failed": 0, 00:14:46.807 "io_timeout": 0, 00:14:46.807 "avg_latency_us": 19085.096907704836, 00:14:46.807 "min_latency_us": 291.54934497816595, 00:14:46.807 "max_latency_us": 119052.29694323144 00:14:46.807 } 00:14:46.807 ], 00:14:46.807 "core_count": 1 00:14:46.807 } 00:14:46.807 [2024-09-28 16:16:01.297219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.807 [2024-09-28 16:16:01.297362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.807 [2024-09-28 16:16:01.297377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.807 
16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.807 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:47.066 /dev/nbd0 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # local i 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.066 1+0 records in 00:14:47.066 1+0 records out 00:14:47.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543674 s, 7.5 MB/s 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' 
']' 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.066 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:47.326 /dev/nbd1 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:47.326 
16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.326 1+0 records in 00:14:47.326 1+0 records out 00:14:47.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606538 s, 6.8 MB/s 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:47.326 16:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:47.326 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:47.326 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:47.326 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.326 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:47.326 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.326 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:47.584 16:16:02 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.584 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:47.843 /dev/nbd1 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:14:47.843 1+0 records in 00:14:47.843 1+0 records out 00:14:47.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360002 s, 11.4 MB/s 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.843 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:48.102 16:16:02 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.102 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:14:48.361 16:16:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78763 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78763 ']' 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78763 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.361 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78763 00:14:48.620 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.620 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.620 killing process with pid 78763 00:14:48.620 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78763' 00:14:48.620 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78763 00:14:48.620 Received shutdown signal, test time was about 9.932431 seconds 00:14:48.620 00:14:48.620 Latency(us) 00:14:48.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.620 =================================================================================================================== 00:14:48.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.620 [2024-09-28 16:16:03.046888] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.620 16:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78763 00:14:48.879 [2024-09-28 16:16:03.440111] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:50.256 00:14:50.256 real 0m13.442s 00:14:50.256 user 0m16.637s 00:14:50.256 sys 0m2.083s 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.256 ************************************ 00:14:50.256 END TEST raid_rebuild_test_io 00:14:50.256 ************************************ 00:14:50.256 16:16:04 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:50.256 16:16:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:50.256 16:16:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.256 16:16:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.256 ************************************ 00:14:50.256 START TEST raid_rebuild_test_sb_io 00:14:50.256 ************************************ 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # 
local verify=true 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.256 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@577 -- # local create_arg 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79172 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79172 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79172 ']' 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.257 16:16:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.257 [2024-09-28 16:16:04.872492] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:50.257 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.257 Zero copy mechanism will not be used. 00:14:50.257 [2024-09-28 16:16:04.873137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79172 ] 00:14:50.516 [2024-09-28 16:16:05.040392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.775 [2024-09-28 16:16:05.236364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.775 [2024-09-28 16:16:05.418823] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.775 [2024-09-28 16:16:05.418864] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.034 BaseBdev1_malloc 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.034 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.293 [2024-09-28 16:16:05.721313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.293 [2024-09-28 16:16:05.721401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.293 [2024-09-28 16:16:05.721424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:51.293 [2024-09-28 16:16:05.721439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.293 [2024-09-28 16:16:05.723461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.293 [2024-09-28 16:16:05.723503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.293 BaseBdev1 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.293 BaseBdev2_malloc 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.293 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.293 [2024-09-28 16:16:05.805764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:51.293 [2024-09-28 16:16:05.805821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.293 [2024-09-28 16:16:05.805841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:51.293 [2024-09-28 16:16:05.805851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.293 [2024-09-28 16:16:05.807700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.293 [2024-09-28 16:16:05.807742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.294 BaseBdev2 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.294 BaseBdev3_malloc 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.294 16:16:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.294 [2024-09-28 16:16:05.858761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:51.294 [2024-09-28 16:16:05.858810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.294 [2024-09-28 16:16:05.858830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:51.294 [2024-09-28 16:16:05.858841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.294 [2024-09-28 16:16:05.860686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.294 [2024-09-28 16:16:05.860726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:51.294 BaseBdev3 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.294 BaseBdev4_malloc 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.294 [2024-09-28 16:16:05.911832] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:51.294 [2024-09-28 16:16:05.911883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.294 [2024-09-28 16:16:05.911900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:51.294 [2024-09-28 16:16:05.911910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.294 [2024-09-28 16:16:05.913896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.294 [2024-09-28 16:16:05.913935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:51.294 BaseBdev4 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.294 spare_malloc 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.294 spare_delay 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:51.294 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.555 [2024-09-28 16:16:05.977359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:51.555 [2024-09-28 16:16:05.977430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.555 [2024-09-28 16:16:05.977449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:51.555 [2024-09-28 16:16:05.977460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.555 [2024-09-28 16:16:05.979459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.555 [2024-09-28 16:16:05.979501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:51.555 spare 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.555 [2024-09-28 16:16:05.989396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.555 [2024-09-28 16:16:05.990998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.555 [2024-09-28 16:16:05.991093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.555 [2024-09-28 16:16:05.991148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:51.555 [2024-09-28 16:16:05.991329] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:51.555 [2024-09-28 16:16:05.991359] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:51.555 [2024-09-28 16:16:05.991605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:51.555 [2024-09-28 16:16:05.991769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:51.555 [2024-09-28 16:16:05.991779] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:51.555 [2024-09-28 16:16:05.991923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.555 16:16:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:51.555 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.555 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.555 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.555 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.555 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.555 "name": "raid_bdev1", 00:14:51.555 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:51.555 "strip_size_kb": 0, 00:14:51.555 "state": "online", 00:14:51.555 "raid_level": "raid1", 00:14:51.555 "superblock": true, 00:14:51.555 "num_base_bdevs": 4, 00:14:51.555 "num_base_bdevs_discovered": 4, 00:14:51.555 "num_base_bdevs_operational": 4, 00:14:51.555 "base_bdevs_list": [ 00:14:51.555 { 00:14:51.555 "name": "BaseBdev1", 00:14:51.555 "uuid": "192e9c24-9712-51e5-8be5-49485dbd3832", 00:14:51.555 "is_configured": true, 00:14:51.555 "data_offset": 2048, 00:14:51.555 "data_size": 63488 00:14:51.555 }, 00:14:51.555 { 00:14:51.555 "name": "BaseBdev2", 00:14:51.555 "uuid": "656c8523-221a-5134-a7ae-07160c0226f7", 00:14:51.555 "is_configured": true, 00:14:51.555 "data_offset": 2048, 00:14:51.555 "data_size": 63488 00:14:51.555 }, 00:14:51.555 { 00:14:51.555 "name": "BaseBdev3", 00:14:51.555 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:51.555 "is_configured": true, 00:14:51.555 "data_offset": 2048, 00:14:51.555 "data_size": 63488 00:14:51.555 }, 00:14:51.555 { 00:14:51.555 "name": "BaseBdev4", 00:14:51.555 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:51.555 "is_configured": true, 00:14:51.555 "data_offset": 2048, 00:14:51.555 "data_size": 63488 00:14:51.555 } 00:14:51.555 ] 00:14:51.555 }' 00:14:51.555 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:51.555 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.814 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:51.814 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:51.814 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.814 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.814 [2024-09-28 16:16:06.480790] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:52.074 16:16:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.074 [2024-09-28 16:16:06.576314] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.074 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.074 "name": "raid_bdev1", 00:14:52.074 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:52.074 "strip_size_kb": 0, 00:14:52.074 "state": "online", 00:14:52.074 "raid_level": "raid1", 00:14:52.074 "superblock": true, 00:14:52.074 "num_base_bdevs": 4, 00:14:52.074 "num_base_bdevs_discovered": 3, 00:14:52.074 "num_base_bdevs_operational": 3, 00:14:52.074 "base_bdevs_list": [ 00:14:52.074 { 00:14:52.074 "name": null, 00:14:52.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.075 "is_configured": false, 00:14:52.075 "data_offset": 0, 00:14:52.075 "data_size": 63488 00:14:52.075 }, 00:14:52.075 { 00:14:52.075 "name": "BaseBdev2", 00:14:52.075 "uuid": "656c8523-221a-5134-a7ae-07160c0226f7", 00:14:52.075 "is_configured": true, 00:14:52.075 "data_offset": 2048, 00:14:52.075 "data_size": 63488 00:14:52.075 }, 00:14:52.075 { 00:14:52.075 "name": "BaseBdev3", 00:14:52.075 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:52.075 "is_configured": true, 00:14:52.075 "data_offset": 2048, 00:14:52.075 "data_size": 63488 00:14:52.075 }, 00:14:52.075 { 00:14:52.075 "name": "BaseBdev4", 00:14:52.075 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:52.075 "is_configured": true, 00:14:52.075 "data_offset": 2048, 00:14:52.075 "data_size": 63488 00:14:52.075 } 00:14:52.075 ] 00:14:52.075 }' 00:14:52.075 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.075 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.075 [2024-09-28 16:16:06.674989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:52.075 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:52.075 Zero copy mechanism will not be used. 
00:14:52.075 Running I/O for 60 seconds... 00:14:52.334 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.334 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.334 16:16:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.334 [2024-09-28 16:16:06.971448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.334 16:16:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.334 16:16:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:52.593 [2024-09-28 16:16:07.027906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:52.593 [2024-09-28 16:16:07.029798] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.593 [2024-09-28 16:16:07.156137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.593 [2024-09-28 16:16:07.157391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.853 [2024-09-28 16:16:07.388960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:52.853 [2024-09-28 16:16:07.389763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.372 170.00 IOPS, 510.00 MiB/s 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.372 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.632 "name": "raid_bdev1", 00:14:53.632 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:53.632 "strip_size_kb": 0, 00:14:53.632 "state": "online", 00:14:53.632 "raid_level": "raid1", 00:14:53.632 "superblock": true, 00:14:53.632 "num_base_bdevs": 4, 00:14:53.632 "num_base_bdevs_discovered": 4, 00:14:53.632 "num_base_bdevs_operational": 4, 00:14:53.632 "process": { 00:14:53.632 "type": "rebuild", 00:14:53.632 "target": "spare", 00:14:53.632 "progress": { 00:14:53.632 "blocks": 12288, 00:14:53.632 "percent": 19 00:14:53.632 } 00:14:53.632 }, 00:14:53.632 "base_bdevs_list": [ 00:14:53.632 { 00:14:53.632 "name": "spare", 00:14:53.632 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:53.632 "is_configured": true, 00:14:53.632 "data_offset": 2048, 00:14:53.632 "data_size": 63488 00:14:53.632 }, 00:14:53.632 { 00:14:53.632 "name": "BaseBdev2", 00:14:53.632 "uuid": "656c8523-221a-5134-a7ae-07160c0226f7", 00:14:53.632 "is_configured": true, 00:14:53.632 "data_offset": 2048, 00:14:53.632 "data_size": 63488 00:14:53.632 }, 00:14:53.632 { 00:14:53.632 "name": "BaseBdev3", 00:14:53.632 "uuid": 
"c354b897-b331-5e89-a247-15711edc4c01", 00:14:53.632 "is_configured": true, 00:14:53.632 "data_offset": 2048, 00:14:53.632 "data_size": 63488 00:14:53.632 }, 00:14:53.632 { 00:14:53.632 "name": "BaseBdev4", 00:14:53.632 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:53.632 "is_configured": true, 00:14:53.632 "data_offset": 2048, 00:14:53.632 "data_size": 63488 00:14:53.632 } 00:14:53.632 ] 00:14:53.632 }' 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.632 [2024-09-28 16:16:08.132096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.632 [2024-09-28 16:16:08.154588] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.632 [2024-09-28 16:16:08.255195] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.632 [2024-09-28 16:16:08.263508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.632 [2024-09-28 16:16:08.263598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.632 [2024-09-28 16:16:08.263625] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: 
No such device 00:14:53.632 [2024-09-28 16:16:08.284985] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.632 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.892 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.892 16:16:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.892 "name": "raid_bdev1", 00:14:53.892 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:53.892 "strip_size_kb": 0, 00:14:53.892 "state": "online", 00:14:53.892 "raid_level": "raid1", 00:14:53.892 "superblock": true, 00:14:53.892 "num_base_bdevs": 4, 00:14:53.892 "num_base_bdevs_discovered": 3, 00:14:53.892 "num_base_bdevs_operational": 3, 00:14:53.892 "base_bdevs_list": [ 00:14:53.892 { 00:14:53.892 "name": null, 00:14:53.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.892 "is_configured": false, 00:14:53.892 "data_offset": 0, 00:14:53.892 "data_size": 63488 00:14:53.892 }, 00:14:53.892 { 00:14:53.892 "name": "BaseBdev2", 00:14:53.892 "uuid": "656c8523-221a-5134-a7ae-07160c0226f7", 00:14:53.892 "is_configured": true, 00:14:53.892 "data_offset": 2048, 00:14:53.892 "data_size": 63488 00:14:53.892 }, 00:14:53.892 { 00:14:53.892 "name": "BaseBdev3", 00:14:53.892 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:53.892 "is_configured": true, 00:14:53.892 "data_offset": 2048, 00:14:53.892 "data_size": 63488 00:14:53.892 }, 00:14:53.892 { 00:14:53.892 "name": "BaseBdev4", 00:14:53.892 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:53.892 "is_configured": true, 00:14:53.892 "data_offset": 2048, 00:14:53.892 "data_size": 63488 00:14:53.892 } 00:14:53.892 ] 00:14:53.892 }' 00:14:53.892 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.892 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.151 166.00 IOPS, 498.00 MiB/s 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.151 "name": "raid_bdev1", 00:14:54.151 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:54.151 "strip_size_kb": 0, 00:14:54.151 "state": "online", 00:14:54.151 "raid_level": "raid1", 00:14:54.151 "superblock": true, 00:14:54.151 "num_base_bdevs": 4, 00:14:54.151 "num_base_bdevs_discovered": 3, 00:14:54.151 "num_base_bdevs_operational": 3, 00:14:54.151 "base_bdevs_list": [ 00:14:54.151 { 00:14:54.151 "name": null, 00:14:54.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.151 "is_configured": false, 00:14:54.151 "data_offset": 0, 00:14:54.151 "data_size": 63488 00:14:54.151 }, 00:14:54.151 { 00:14:54.151 "name": "BaseBdev2", 00:14:54.151 "uuid": "656c8523-221a-5134-a7ae-07160c0226f7", 00:14:54.151 "is_configured": true, 00:14:54.151 "data_offset": 2048, 00:14:54.151 "data_size": 63488 00:14:54.151 }, 00:14:54.151 { 00:14:54.151 "name": "BaseBdev3", 00:14:54.151 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:54.151 "is_configured": true, 00:14:54.151 "data_offset": 2048, 00:14:54.151 "data_size": 63488 00:14:54.151 }, 00:14:54.151 { 00:14:54.151 "name": "BaseBdev4", 
00:14:54.151 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:54.151 "is_configured": true, 00:14:54.151 "data_offset": 2048, 00:14:54.151 "data_size": 63488 00:14:54.151 } 00:14:54.151 ] 00:14:54.151 }' 00:14:54.151 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.410 [2024-09-28 16:16:08.934337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.410 16:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:54.410 [2024-09-28 16:16:08.983821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:54.410 [2024-09-28 16:16:08.985678] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.410 [2024-09-28 16:16:09.093621] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:54.410 [2024-09-28 16:16:09.094139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:54.670 [2024-09-28 16:16:09.304491] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:54.670 [2024-09-28 16:16:09.305267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:55.239 [2024-09-28 16:16:09.638484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:55.239 170.33 IOPS, 511.00 MiB/s [2024-09-28 16:16:09.755009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:55.239 [2024-09-28 16:16:09.755802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.498 16:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.498 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.498 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:55.498 "name": "raid_bdev1", 00:14:55.498 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:55.498 "strip_size_kb": 0, 00:14:55.498 "state": "online", 00:14:55.498 "raid_level": "raid1", 00:14:55.498 "superblock": true, 00:14:55.498 "num_base_bdevs": 4, 00:14:55.498 "num_base_bdevs_discovered": 4, 00:14:55.498 "num_base_bdevs_operational": 4, 00:14:55.498 "process": { 00:14:55.498 "type": "rebuild", 00:14:55.498 "target": "spare", 00:14:55.498 "progress": { 00:14:55.498 "blocks": 12288, 00:14:55.498 "percent": 19 00:14:55.499 } 00:14:55.499 }, 00:14:55.499 "base_bdevs_list": [ 00:14:55.499 { 00:14:55.499 "name": "spare", 00:14:55.499 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:55.499 "is_configured": true, 00:14:55.499 "data_offset": 2048, 00:14:55.499 "data_size": 63488 00:14:55.499 }, 00:14:55.499 { 00:14:55.499 "name": "BaseBdev2", 00:14:55.499 "uuid": "656c8523-221a-5134-a7ae-07160c0226f7", 00:14:55.499 "is_configured": true, 00:14:55.499 "data_offset": 2048, 00:14:55.499 "data_size": 63488 00:14:55.499 }, 00:14:55.499 { 00:14:55.499 "name": "BaseBdev3", 00:14:55.499 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:55.499 "is_configured": true, 00:14:55.499 "data_offset": 2048, 00:14:55.499 "data_size": 63488 00:14:55.499 }, 00:14:55.499 { 00:14:55.499 "name": "BaseBdev4", 00:14:55.499 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:55.499 "is_configured": true, 00:14:55.499 "data_offset": 2048, 00:14:55.499 "data_size": 63488 00:14:55.499 } 00:14:55.499 ] 00:14:55.499 }' 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:55.499 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.499 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 [2024-09-28 16:16:10.136420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.759 [2024-09-28 16:16:10.184887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:55.759 [2024-09-28 16:16:10.185303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:55.759 [2024-09-28 16:16:10.296739] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:55.759 [2024-09-28 16:16:10.296806] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:55.759 [2024-09-28 16:16:10.310343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.759 16:16:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.759 [2024-09-28 16:16:10.316002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.759 "name": "raid_bdev1", 00:14:55.759 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:55.759 "strip_size_kb": 0, 00:14:55.759 "state": "online", 00:14:55.759 "raid_level": "raid1", 00:14:55.759 "superblock": true, 00:14:55.759 "num_base_bdevs": 4, 00:14:55.759 "num_base_bdevs_discovered": 3, 00:14:55.759 "num_base_bdevs_operational": 3, 00:14:55.759 "process": { 
00:14:55.759 "type": "rebuild", 00:14:55.759 "target": "spare", 00:14:55.759 "progress": { 00:14:55.759 "blocks": 16384, 00:14:55.759 "percent": 25 00:14:55.759 } 00:14:55.759 }, 00:14:55.759 "base_bdevs_list": [ 00:14:55.759 { 00:14:55.759 "name": "spare", 00:14:55.759 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:55.759 "is_configured": true, 00:14:55.759 "data_offset": 2048, 00:14:55.759 "data_size": 63488 00:14:55.759 }, 00:14:55.759 { 00:14:55.759 "name": null, 00:14:55.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.759 "is_configured": false, 00:14:55.759 "data_offset": 0, 00:14:55.759 "data_size": 63488 00:14:55.759 }, 00:14:55.759 { 00:14:55.759 "name": "BaseBdev3", 00:14:55.759 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:55.759 "is_configured": true, 00:14:55.759 "data_offset": 2048, 00:14:55.759 "data_size": 63488 00:14:55.759 }, 00:14:55.759 { 00:14:55.759 "name": "BaseBdev4", 00:14:55.759 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:55.759 "is_configured": true, 00:14:55.759 "data_offset": 2048, 00:14:55.759 "data_size": 63488 00:14:55.759 } 00:14:55.759 ] 00:14:55.759 }' 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.759 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=503 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.019 16:16:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.019 "name": "raid_bdev1", 00:14:56.019 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:56.019 "strip_size_kb": 0, 00:14:56.019 "state": "online", 00:14:56.019 "raid_level": "raid1", 00:14:56.019 "superblock": true, 00:14:56.019 "num_base_bdevs": 4, 00:14:56.019 "num_base_bdevs_discovered": 3, 00:14:56.019 "num_base_bdevs_operational": 3, 00:14:56.019 "process": { 00:14:56.019 "type": "rebuild", 00:14:56.019 "target": "spare", 00:14:56.019 "progress": { 00:14:56.019 "blocks": 16384, 00:14:56.019 "percent": 25 00:14:56.019 } 00:14:56.019 }, 00:14:56.019 "base_bdevs_list": [ 00:14:56.019 { 00:14:56.019 "name": "spare", 00:14:56.019 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:56.019 "is_configured": true, 00:14:56.019 "data_offset": 2048, 00:14:56.019 "data_size": 63488 00:14:56.019 }, 00:14:56.019 { 00:14:56.019 "name": null, 00:14:56.019 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:56.019 "is_configured": false, 00:14:56.019 "data_offset": 0, 00:14:56.019 "data_size": 63488 00:14:56.019 }, 00:14:56.019 { 00:14:56.019 "name": "BaseBdev3", 00:14:56.019 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:56.019 "is_configured": true, 00:14:56.019 "data_offset": 2048, 00:14:56.019 "data_size": 63488 00:14:56.019 }, 00:14:56.019 { 00:14:56.019 "name": "BaseBdev4", 00:14:56.019 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:56.019 "is_configured": true, 00:14:56.019 "data_offset": 2048, 00:14:56.019 "data_size": 63488 00:14:56.019 } 00:14:56.019 ] 00:14:56.019 }' 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.019 16:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.588 148.00 IOPS, 444.00 MiB/s [2024-09-28 16:16:11.025163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.157 16:16:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.157 "name": "raid_bdev1", 00:14:57.157 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:57.157 "strip_size_kb": 0, 00:14:57.157 "state": "online", 00:14:57.157 "raid_level": "raid1", 00:14:57.157 "superblock": true, 00:14:57.157 "num_base_bdevs": 4, 00:14:57.157 "num_base_bdevs_discovered": 3, 00:14:57.157 "num_base_bdevs_operational": 3, 00:14:57.157 "process": { 00:14:57.157 "type": "rebuild", 00:14:57.157 "target": "spare", 00:14:57.157 "progress": { 00:14:57.157 "blocks": 32768, 00:14:57.157 "percent": 51 00:14:57.157 } 00:14:57.157 }, 00:14:57.157 "base_bdevs_list": [ 00:14:57.157 { 00:14:57.157 "name": "spare", 00:14:57.157 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:57.157 "is_configured": true, 00:14:57.157 "data_offset": 2048, 00:14:57.157 "data_size": 63488 00:14:57.157 }, 00:14:57.157 { 00:14:57.157 "name": null, 00:14:57.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.157 "is_configured": false, 00:14:57.157 "data_offset": 0, 00:14:57.157 "data_size": 63488 00:14:57.157 }, 00:14:57.157 { 00:14:57.157 "name": "BaseBdev3", 00:14:57.157 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:57.157 "is_configured": true, 00:14:57.157 "data_offset": 2048, 00:14:57.157 "data_size": 
63488 00:14:57.157 }, 00:14:57.157 { 00:14:57.157 "name": "BaseBdev4", 00:14:57.157 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:57.157 "is_configured": true, 00:14:57.157 "data_offset": 2048, 00:14:57.157 "data_size": 63488 00:14:57.157 } 00:14:57.157 ] 00:14:57.157 }' 00:14:57.157 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.158 131.60 IOPS, 394.80 MiB/s 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.158 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.158 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.158 16:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.158 [2024-09-28 16:16:11.831313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:57.158 [2024-09-28 16:16:11.831899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:57.727 [2024-09-28 16:16:12.161324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:57.986 [2024-09-28 16:16:12.601007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:58.246 117.17 IOPS, 351.50 MiB/s 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.246 "name": "raid_bdev1", 00:14:58.246 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:58.246 "strip_size_kb": 0, 00:14:58.246 "state": "online", 00:14:58.246 "raid_level": "raid1", 00:14:58.246 "superblock": true, 00:14:58.246 "num_base_bdevs": 4, 00:14:58.246 "num_base_bdevs_discovered": 3, 00:14:58.246 "num_base_bdevs_operational": 3, 00:14:58.246 "process": { 00:14:58.246 "type": "rebuild", 00:14:58.246 "target": "spare", 00:14:58.246 "progress": { 00:14:58.246 "blocks": 53248, 00:14:58.246 "percent": 83 00:14:58.246 } 00:14:58.246 }, 00:14:58.246 "base_bdevs_list": [ 00:14:58.246 { 00:14:58.246 "name": "spare", 00:14:58.246 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:58.246 "is_configured": true, 00:14:58.246 "data_offset": 2048, 00:14:58.246 "data_size": 63488 00:14:58.246 }, 00:14:58.246 { 00:14:58.246 "name": null, 00:14:58.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.246 "is_configured": false, 00:14:58.246 "data_offset": 0, 00:14:58.246 "data_size": 63488 00:14:58.246 }, 00:14:58.246 { 00:14:58.246 "name": "BaseBdev3", 
00:14:58.246 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:58.246 "is_configured": true, 00:14:58.246 "data_offset": 2048, 00:14:58.246 "data_size": 63488 00:14:58.246 }, 00:14:58.246 { 00:14:58.246 "name": "BaseBdev4", 00:14:58.246 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:58.246 "is_configured": true, 00:14:58.246 "data_offset": 2048, 00:14:58.246 "data_size": 63488 00:14:58.246 } 00:14:58.246 ] 00:14:58.246 }' 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.246 16:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.506 [2024-09-28 16:16:13.029371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:58.765 [2024-09-28 16:16:13.258257] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:58.765 [2024-09-28 16:16:13.363201] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:58.765 [2024-09-28 16:16:13.365596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.283 105.14 IOPS, 315.43 MiB/s 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.284 "name": "raid_bdev1", 00:14:59.284 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:59.284 "strip_size_kb": 0, 00:14:59.284 "state": "online", 00:14:59.284 "raid_level": "raid1", 00:14:59.284 "superblock": true, 00:14:59.284 "num_base_bdevs": 4, 00:14:59.284 "num_base_bdevs_discovered": 3, 00:14:59.284 "num_base_bdevs_operational": 3, 00:14:59.284 "base_bdevs_list": [ 00:14:59.284 { 00:14:59.284 "name": "spare", 00:14:59.284 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:59.284 "is_configured": true, 00:14:59.284 "data_offset": 2048, 00:14:59.284 "data_size": 63488 00:14:59.284 }, 00:14:59.284 { 00:14:59.284 "name": null, 00:14:59.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.284 "is_configured": false, 00:14:59.284 "data_offset": 0, 00:14:59.284 "data_size": 63488 00:14:59.284 }, 00:14:59.284 { 00:14:59.284 "name": "BaseBdev3", 00:14:59.284 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:59.284 "is_configured": true, 00:14:59.284 "data_offset": 2048, 00:14:59.284 "data_size": 63488 00:14:59.284 }, 
00:14:59.284 { 00:14:59.284 "name": "BaseBdev4", 00:14:59.284 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:59.284 "is_configured": true, 00:14:59.284 "data_offset": 2048, 00:14:59.284 "data_size": 63488 00:14:59.284 } 00:14:59.284 ] 00:14:59.284 }' 00:14:59.284 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.544 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:59.544 16:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.544 16:16:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.544 "name": "raid_bdev1", 00:14:59.544 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:59.544 "strip_size_kb": 0, 00:14:59.544 "state": "online", 00:14:59.544 "raid_level": "raid1", 00:14:59.544 "superblock": true, 00:14:59.544 "num_base_bdevs": 4, 00:14:59.544 "num_base_bdevs_discovered": 3, 00:14:59.544 "num_base_bdevs_operational": 3, 00:14:59.544 "base_bdevs_list": [ 00:14:59.544 { 00:14:59.544 "name": "spare", 00:14:59.544 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:59.544 "is_configured": true, 00:14:59.544 "data_offset": 2048, 00:14:59.544 "data_size": 63488 00:14:59.544 }, 00:14:59.544 { 00:14:59.544 "name": null, 00:14:59.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.544 "is_configured": false, 00:14:59.544 "data_offset": 0, 00:14:59.544 "data_size": 63488 00:14:59.544 }, 00:14:59.544 { 00:14:59.544 "name": "BaseBdev3", 00:14:59.544 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:59.544 "is_configured": true, 00:14:59.544 "data_offset": 2048, 00:14:59.544 "data_size": 63488 00:14:59.544 }, 00:14:59.544 { 00:14:59.544 "name": "BaseBdev4", 00:14:59.544 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:59.544 "is_configured": true, 00:14:59.544 "data_offset": 2048, 00:14:59.544 "data_size": 63488 00:14:59.544 } 00:14:59.544 ] 00:14:59.544 }' 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.544 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.544 "name": "raid_bdev1", 00:14:59.544 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:14:59.544 "strip_size_kb": 0, 00:14:59.544 "state": "online", 00:14:59.544 "raid_level": "raid1", 00:14:59.544 "superblock": true, 00:14:59.544 "num_base_bdevs": 4, 00:14:59.544 "num_base_bdevs_discovered": 3, 00:14:59.544 
"num_base_bdevs_operational": 3, 00:14:59.544 "base_bdevs_list": [ 00:14:59.544 { 00:14:59.544 "name": "spare", 00:14:59.544 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:14:59.544 "is_configured": true, 00:14:59.544 "data_offset": 2048, 00:14:59.544 "data_size": 63488 00:14:59.544 }, 00:14:59.544 { 00:14:59.545 "name": null, 00:14:59.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.545 "is_configured": false, 00:14:59.545 "data_offset": 0, 00:14:59.545 "data_size": 63488 00:14:59.545 }, 00:14:59.545 { 00:14:59.545 "name": "BaseBdev3", 00:14:59.545 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:14:59.545 "is_configured": true, 00:14:59.545 "data_offset": 2048, 00:14:59.545 "data_size": 63488 00:14:59.545 }, 00:14:59.545 { 00:14:59.545 "name": "BaseBdev4", 00:14:59.545 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:14:59.545 "is_configured": true, 00:14:59.545 "data_offset": 2048, 00:14:59.545 "data_size": 63488 00:14:59.545 } 00:14:59.545 ] 00:14:59.545 }' 00:14:59.545 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.545 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.114 [2024-09-28 16:16:14.616764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.114 [2024-09-28 16:16:14.616798] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.114 97.25 IOPS, 291.75 MiB/s 00:15:00.114 Latency(us) 00:15:00.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.114 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, 
percentage: 50, depth: 2, IO size: 3145728) 00:15:00.114 raid_bdev1 : 8.01 97.29 291.87 0.00 0.00 14928.24 325.53 115847.04 00:15:00.114 =================================================================================================================== 00:15:00.114 Total : 97.29 291.87 0.00 0.00 14928.24 325.53 115847.04 00:15:00.114 [2024-09-28 16:16:14.688148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.114 [2024-09-28 16:16:14.688252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.114 [2024-09-28 16:16:14.688364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.114 [2024-09-28 16:16:14.688416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:00.114 { 00:15:00.114 "results": [ 00:15:00.114 { 00:15:00.114 "job": "raid_bdev1", 00:15:00.114 "core_mask": "0x1", 00:15:00.114 "workload": "randrw", 00:15:00.114 "percentage": 50, 00:15:00.114 "status": "finished", 00:15:00.114 "queue_depth": 2, 00:15:00.114 "io_size": 3145728, 00:15:00.114 "runtime": 8.006926, 00:15:00.114 "iops": 97.29077051542627, 00:15:00.114 "mibps": 291.8723115462788, 00:15:00.114 "io_failed": 0, 00:15:00.114 "io_timeout": 0, 00:15:00.114 "avg_latency_us": 14928.24168035383, 00:15:00.114 "min_latency_us": 325.5336244541485, 00:15:00.114 "max_latency_us": 115847.04279475982 00:15:00.114 } 00:15:00.114 ], 00:15:00.114 "core_count": 1 00:15:00.114 } 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:00.114 16:16:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.114 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:00.374 /dev/nbd0 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.374 1+0 records in 00:15:00.374 1+0 records out 00:15:00.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418781 s, 9.8 MB/s 00:15:00.374 16:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.374 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:00.633 /dev/nbd1 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:00.633 16:16:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.633 1+0 records in 00:15:00.633 1+0 records out 00:15:00.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450811 s, 9.1 MB/s 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.633 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:15:00.892 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:00.892 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.892 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:00.892 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.892 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:00.892 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.892 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks 
/var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.152 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:01.410 /dev/nbd1 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:01.410 16:16:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.410 1+0 records in 00:15:01.410 1+0 records out 00:15:01.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557181 s, 7.4 MB/s 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.410 16:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.669 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:01.928 16:16:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.928 [2024-09-28 16:16:16.457116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:01.928 [2024-09-28 16:16:16.457173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.928 [2024-09-28 16:16:16.457195] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:01.928 [2024-09-28 16:16:16.457205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.928 [2024-09-28 16:16:16.459364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.928 [2024-09-28 16:16:16.459403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:01.928 [2024-09-28 16:16:16.459486] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:01.928 [2024-09-28 16:16:16.459535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.928 [2024-09-28 16:16:16.459680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.928 [2024-09-28 16:16:16.459783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:01.928 spare 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.928 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 [2024-09-28 16:16:16.559676] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:01.929 [2024-09-28 16:16:16.559703] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:01.929 [2024-09-28 16:16:16.559939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:01.929 [2024-09-28 16:16:16.560099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:01.929 [2024-09-28 16:16:16.560108] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:01.929 [2024-09-28 16:16:16.560276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.188 16:16:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.188 "name": "raid_bdev1", 00:15:02.188 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:02.188 "strip_size_kb": 0, 00:15:02.188 "state": "online", 00:15:02.188 "raid_level": "raid1", 00:15:02.188 "superblock": true, 00:15:02.188 "num_base_bdevs": 4, 00:15:02.188 "num_base_bdevs_discovered": 3, 00:15:02.188 "num_base_bdevs_operational": 3, 00:15:02.188 "base_bdevs_list": [ 00:15:02.188 { 00:15:02.188 "name": "spare", 00:15:02.188 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:15:02.188 "is_configured": true, 00:15:02.188 "data_offset": 2048, 00:15:02.188 "data_size": 63488 00:15:02.188 }, 00:15:02.188 { 00:15:02.188 "name": null, 00:15:02.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.188 "is_configured": false, 00:15:02.188 "data_offset": 2048, 00:15:02.188 "data_size": 63488 00:15:02.188 }, 00:15:02.188 { 00:15:02.188 "name": "BaseBdev3", 00:15:02.188 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:02.188 "is_configured": true, 00:15:02.188 "data_offset": 2048, 00:15:02.188 "data_size": 63488 00:15:02.188 }, 00:15:02.188 { 00:15:02.188 "name": "BaseBdev4", 00:15:02.188 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:02.188 "is_configured": true, 00:15:02.188 "data_offset": 2048, 00:15:02.188 "data_size": 63488 00:15:02.188 } 00:15:02.188 ] 00:15:02.188 }' 00:15:02.188 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.188 16:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.447 16:16:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.447 "name": "raid_bdev1", 00:15:02.447 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:02.447 "strip_size_kb": 0, 00:15:02.447 "state": "online", 00:15:02.447 "raid_level": "raid1", 00:15:02.447 "superblock": true, 00:15:02.447 "num_base_bdevs": 4, 00:15:02.447 "num_base_bdevs_discovered": 3, 00:15:02.447 "num_base_bdevs_operational": 3, 00:15:02.447 "base_bdevs_list": [ 00:15:02.447 { 00:15:02.447 "name": "spare", 00:15:02.447 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:15:02.447 "is_configured": true, 00:15:02.447 "data_offset": 2048, 00:15:02.447 "data_size": 63488 00:15:02.447 }, 00:15:02.447 { 00:15:02.447 "name": null, 00:15:02.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.447 "is_configured": false, 00:15:02.447 "data_offset": 2048, 00:15:02.447 "data_size": 63488 00:15:02.447 }, 00:15:02.447 { 00:15:02.447 "name": "BaseBdev3", 00:15:02.447 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:02.447 "is_configured": true, 00:15:02.447 "data_offset": 2048, 00:15:02.447 "data_size": 63488 00:15:02.447 }, 00:15:02.447 { 00:15:02.447 "name": "BaseBdev4", 00:15:02.447 "uuid": 
"c86d5474-851c-5f38-b996-7f52a1379737", 00:15:02.447 "is_configured": true, 00:15:02.447 "data_offset": 2048, 00:15:02.447 "data_size": 63488 00:15:02.447 } 00:15:02.447 ] 00:15:02.447 }' 00:15:02.447 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.707 [2024-09-28 16:16:17.271853] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.707 "name": "raid_bdev1", 00:15:02.707 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:02.707 "strip_size_kb": 0, 00:15:02.707 "state": "online", 00:15:02.707 "raid_level": "raid1", 00:15:02.707 "superblock": true, 00:15:02.707 "num_base_bdevs": 4, 00:15:02.707 "num_base_bdevs_discovered": 2, 00:15:02.707 
"num_base_bdevs_operational": 2, 00:15:02.707 "base_bdevs_list": [ 00:15:02.707 { 00:15:02.707 "name": null, 00:15:02.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.707 "is_configured": false, 00:15:02.707 "data_offset": 0, 00:15:02.707 "data_size": 63488 00:15:02.707 }, 00:15:02.707 { 00:15:02.707 "name": null, 00:15:02.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.707 "is_configured": false, 00:15:02.707 "data_offset": 2048, 00:15:02.707 "data_size": 63488 00:15:02.707 }, 00:15:02.707 { 00:15:02.707 "name": "BaseBdev3", 00:15:02.707 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:02.707 "is_configured": true, 00:15:02.707 "data_offset": 2048, 00:15:02.707 "data_size": 63488 00:15:02.707 }, 00:15:02.707 { 00:15:02.707 "name": "BaseBdev4", 00:15:02.707 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:02.707 "is_configured": true, 00:15:02.707 "data_offset": 2048, 00:15:02.707 "data_size": 63488 00:15:02.707 } 00:15:02.707 ] 00:15:02.707 }' 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.707 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.276 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.276 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.276 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.276 [2024-09-28 16:16:17.719191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.276 [2024-09-28 16:16:17.719363] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:03.276 [2024-09-28 16:16:17.719383] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:03.276 [2024-09-28 16:16:17.719415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.276 [2024-09-28 16:16:17.732544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:03.276 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.276 16:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:03.276 [2024-09-28 16:16:17.734270] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.215 "name": "raid_bdev1", 00:15:04.215 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:04.215 "strip_size_kb": 0, 00:15:04.215 "state": "online", 
00:15:04.215 "raid_level": "raid1", 00:15:04.215 "superblock": true, 00:15:04.215 "num_base_bdevs": 4, 00:15:04.215 "num_base_bdevs_discovered": 3, 00:15:04.215 "num_base_bdevs_operational": 3, 00:15:04.215 "process": { 00:15:04.215 "type": "rebuild", 00:15:04.215 "target": "spare", 00:15:04.215 "progress": { 00:15:04.215 "blocks": 20480, 00:15:04.215 "percent": 32 00:15:04.215 } 00:15:04.215 }, 00:15:04.215 "base_bdevs_list": [ 00:15:04.215 { 00:15:04.215 "name": "spare", 00:15:04.215 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:15:04.215 "is_configured": true, 00:15:04.215 "data_offset": 2048, 00:15:04.215 "data_size": 63488 00:15:04.215 }, 00:15:04.215 { 00:15:04.215 "name": null, 00:15:04.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.215 "is_configured": false, 00:15:04.215 "data_offset": 2048, 00:15:04.215 "data_size": 63488 00:15:04.215 }, 00:15:04.215 { 00:15:04.215 "name": "BaseBdev3", 00:15:04.215 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:04.215 "is_configured": true, 00:15:04.215 "data_offset": 2048, 00:15:04.215 "data_size": 63488 00:15:04.215 }, 00:15:04.215 { 00:15:04.215 "name": "BaseBdev4", 00:15:04.215 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:04.215 "is_configured": true, 00:15:04.215 "data_offset": 2048, 00:15:04.215 "data_size": 63488 00:15:04.215 } 00:15:04.215 ] 00:15:04.215 }' 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:04.215 16:16:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.215 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.215 [2024-09-28 16:16:18.895014] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.474 [2024-09-28 16:16:18.938856] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:04.474 [2024-09-28 16:16:18.938915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.474 [2024-09-28 16:16:18.938930] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.474 [2024-09-28 16:16:18.938938] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.474 16:16:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.474 16:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.474 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.474 "name": "raid_bdev1", 00:15:04.474 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:04.474 "strip_size_kb": 0, 00:15:04.474 "state": "online", 00:15:04.474 "raid_level": "raid1", 00:15:04.474 "superblock": true, 00:15:04.475 "num_base_bdevs": 4, 00:15:04.475 "num_base_bdevs_discovered": 2, 00:15:04.475 "num_base_bdevs_operational": 2, 00:15:04.475 "base_bdevs_list": [ 00:15:04.475 { 00:15:04.475 "name": null, 00:15:04.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.475 "is_configured": false, 00:15:04.475 "data_offset": 0, 00:15:04.475 "data_size": 63488 00:15:04.475 }, 00:15:04.475 { 00:15:04.475 "name": null, 00:15:04.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.475 "is_configured": false, 00:15:04.475 "data_offset": 2048, 00:15:04.475 "data_size": 63488 00:15:04.475 }, 00:15:04.475 { 00:15:04.475 "name": "BaseBdev3", 00:15:04.475 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:04.475 "is_configured": true, 00:15:04.475 "data_offset": 2048, 00:15:04.475 "data_size": 63488 00:15:04.475 }, 00:15:04.475 { 00:15:04.475 "name": "BaseBdev4", 00:15:04.475 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:04.475 "is_configured": true, 00:15:04.475 "data_offset": 2048, 00:15:04.475 
"data_size": 63488 00:15:04.475 } 00:15:04.475 ] 00:15:04.475 }' 00:15:04.475 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.475 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.043 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.043 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.043 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.043 [2024-09-28 16:16:19.440971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.043 [2024-09-28 16:16:19.441080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.043 [2024-09-28 16:16:19.441119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:05.043 [2024-09-28 16:16:19.441150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.043 [2024-09-28 16:16:19.441623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.043 [2024-09-28 16:16:19.441687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.043 [2024-09-28 16:16:19.441789] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:05.043 [2024-09-28 16:16:19.441830] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:05.043 [2024-09-28 16:16:19.441869] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:05.043 [2024-09-28 16:16:19.441938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.043 [2024-09-28 16:16:19.454700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:05.043 spare 00:15:05.043 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.043 [2024-09-28 16:16:19.456645] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.043 16:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.980 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.980 "name": "raid_bdev1", 00:15:05.980 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:05.980 "strip_size_kb": 0, 00:15:05.980 
"state": "online", 00:15:05.980 "raid_level": "raid1", 00:15:05.980 "superblock": true, 00:15:05.981 "num_base_bdevs": 4, 00:15:05.981 "num_base_bdevs_discovered": 3, 00:15:05.981 "num_base_bdevs_operational": 3, 00:15:05.981 "process": { 00:15:05.981 "type": "rebuild", 00:15:05.981 "target": "spare", 00:15:05.981 "progress": { 00:15:05.981 "blocks": 20480, 00:15:05.981 "percent": 32 00:15:05.981 } 00:15:05.981 }, 00:15:05.981 "base_bdevs_list": [ 00:15:05.981 { 00:15:05.981 "name": "spare", 00:15:05.981 "uuid": "0852d6fc-5e02-56de-abec-d68e99a4d856", 00:15:05.981 "is_configured": true, 00:15:05.981 "data_offset": 2048, 00:15:05.981 "data_size": 63488 00:15:05.981 }, 00:15:05.981 { 00:15:05.981 "name": null, 00:15:05.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.981 "is_configured": false, 00:15:05.981 "data_offset": 2048, 00:15:05.981 "data_size": 63488 00:15:05.981 }, 00:15:05.981 { 00:15:05.981 "name": "BaseBdev3", 00:15:05.981 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:05.981 "is_configured": true, 00:15:05.981 "data_offset": 2048, 00:15:05.981 "data_size": 63488 00:15:05.981 }, 00:15:05.981 { 00:15:05.981 "name": "BaseBdev4", 00:15:05.981 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:05.981 "is_configured": true, 00:15:05.981 "data_offset": 2048, 00:15:05.981 "data_size": 63488 00:15:05.981 } 00:15:05.981 ] 00:15:05.981 }' 00:15:05.981 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.981 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.981 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.981 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.981 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:05.981 16:16:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.981 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.981 [2024-09-28 16:16:20.596399] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.981 [2024-09-28 16:16:20.661274] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.981 [2024-09-28 16:16:20.661324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.981 [2024-09-28 16:16:20.661343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.981 [2024-09-28 16:16:20.661350] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.240 16:16:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.240 "name": "raid_bdev1", 00:15:06.240 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:06.240 "strip_size_kb": 0, 00:15:06.240 "state": "online", 00:15:06.240 "raid_level": "raid1", 00:15:06.240 "superblock": true, 00:15:06.240 "num_base_bdevs": 4, 00:15:06.240 "num_base_bdevs_discovered": 2, 00:15:06.240 "num_base_bdevs_operational": 2, 00:15:06.240 "base_bdevs_list": [ 00:15:06.240 { 00:15:06.240 "name": null, 00:15:06.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.240 "is_configured": false, 00:15:06.240 "data_offset": 0, 00:15:06.240 "data_size": 63488 00:15:06.240 }, 00:15:06.240 { 00:15:06.240 "name": null, 00:15:06.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.240 "is_configured": false, 00:15:06.240 "data_offset": 2048, 00:15:06.240 "data_size": 63488 00:15:06.240 }, 00:15:06.240 { 00:15:06.240 "name": "BaseBdev3", 00:15:06.240 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:06.240 "is_configured": true, 00:15:06.240 "data_offset": 2048, 00:15:06.240 "data_size": 63488 00:15:06.240 }, 00:15:06.240 { 00:15:06.240 "name": "BaseBdev4", 00:15:06.240 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:06.240 "is_configured": true, 00:15:06.240 "data_offset": 2048, 00:15:06.240 
"data_size": 63488 00:15:06.240 } 00:15:06.240 ] 00:15:06.240 }' 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.240 16:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.499 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.758 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.758 "name": "raid_bdev1", 00:15:06.758 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:06.758 "strip_size_kb": 0, 00:15:06.758 "state": "online", 00:15:06.758 "raid_level": "raid1", 00:15:06.758 "superblock": true, 00:15:06.758 "num_base_bdevs": 4, 00:15:06.758 "num_base_bdevs_discovered": 2, 00:15:06.758 "num_base_bdevs_operational": 2, 00:15:06.758 "base_bdevs_list": [ 00:15:06.758 { 00:15:06.758 "name": null, 00:15:06.758 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:06.758 "is_configured": false, 00:15:06.758 "data_offset": 0, 00:15:06.758 "data_size": 63488 00:15:06.758 }, 00:15:06.758 { 00:15:06.758 "name": null, 00:15:06.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.758 "is_configured": false, 00:15:06.758 "data_offset": 2048, 00:15:06.758 "data_size": 63488 00:15:06.758 }, 00:15:06.758 { 00:15:06.758 "name": "BaseBdev3", 00:15:06.758 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:06.758 "is_configured": true, 00:15:06.758 "data_offset": 2048, 00:15:06.758 "data_size": 63488 00:15:06.758 }, 00:15:06.758 { 00:15:06.758 "name": "BaseBdev4", 00:15:06.758 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:06.758 "is_configured": true, 00:15:06.758 "data_offset": 2048, 00:15:06.758 "data_size": 63488 00:15:06.758 } 00:15:06.758 ] 00:15:06.758 }' 00:15:06.758 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.759 16:16:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.759 [2024-09-28 16:16:21.307430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:06.759 [2024-09-28 16:16:21.307484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.759 [2024-09-28 16:16:21.307504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:06.759 [2024-09-28 16:16:21.307513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.759 [2024-09-28 16:16:21.307907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.759 [2024-09-28 16:16:21.307935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:06.759 [2024-09-28 16:16:21.308007] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:06.759 [2024-09-28 16:16:21.308020] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:06.759 [2024-09-28 16:16:21.308032] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:06.759 [2024-09-28 16:16:21.308042] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:06.759 BaseBdev1 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.759 16:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.693 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.694 "name": "raid_bdev1", 00:15:07.694 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:07.694 "strip_size_kb": 0, 00:15:07.694 "state": "online", 00:15:07.694 "raid_level": "raid1", 00:15:07.694 "superblock": true, 00:15:07.694 "num_base_bdevs": 4, 00:15:07.694 "num_base_bdevs_discovered": 2, 00:15:07.694 "num_base_bdevs_operational": 2, 00:15:07.694 "base_bdevs_list": [ 00:15:07.694 { 00:15:07.694 "name": null, 00:15:07.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.694 "is_configured": false, 00:15:07.694 
"data_offset": 0, 00:15:07.694 "data_size": 63488 00:15:07.694 }, 00:15:07.694 { 00:15:07.694 "name": null, 00:15:07.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.694 "is_configured": false, 00:15:07.694 "data_offset": 2048, 00:15:07.694 "data_size": 63488 00:15:07.694 }, 00:15:07.694 { 00:15:07.694 "name": "BaseBdev3", 00:15:07.694 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:07.694 "is_configured": true, 00:15:07.694 "data_offset": 2048, 00:15:07.694 "data_size": 63488 00:15:07.694 }, 00:15:07.694 { 00:15:07.694 "name": "BaseBdev4", 00:15:07.694 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:07.694 "is_configured": true, 00:15:07.694 "data_offset": 2048, 00:15:07.694 "data_size": 63488 00:15:07.694 } 00:15:07.694 ] 00:15:07.694 }' 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.694 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.261 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.261 "name": "raid_bdev1", 00:15:08.261 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:08.261 "strip_size_kb": 0, 00:15:08.261 "state": "online", 00:15:08.261 "raid_level": "raid1", 00:15:08.261 "superblock": true, 00:15:08.261 "num_base_bdevs": 4, 00:15:08.261 "num_base_bdevs_discovered": 2, 00:15:08.261 "num_base_bdevs_operational": 2, 00:15:08.261 "base_bdevs_list": [ 00:15:08.261 { 00:15:08.261 "name": null, 00:15:08.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.261 "is_configured": false, 00:15:08.261 "data_offset": 0, 00:15:08.261 "data_size": 63488 00:15:08.261 }, 00:15:08.261 { 00:15:08.261 "name": null, 00:15:08.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.261 "is_configured": false, 00:15:08.261 "data_offset": 2048, 00:15:08.262 "data_size": 63488 00:15:08.262 }, 00:15:08.262 { 00:15:08.262 "name": "BaseBdev3", 00:15:08.262 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:08.262 "is_configured": true, 00:15:08.262 "data_offset": 2048, 00:15:08.262 "data_size": 63488 00:15:08.262 }, 00:15:08.262 { 00:15:08.262 "name": "BaseBdev4", 00:15:08.262 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:08.262 "is_configured": true, 00:15:08.262 "data_offset": 2048, 00:15:08.262 "data_size": 63488 00:15:08.262 } 00:15:08.262 ] 00:15:08.262 }' 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.262 [2024-09-28 16:16:22.924855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.262 [2024-09-28 16:16:22.925032] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:08.262 [2024-09-28 16:16:22.925050] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:08.262 request: 00:15:08.262 { 00:15:08.262 "base_bdev": "BaseBdev1", 00:15:08.262 "raid_bdev": "raid_bdev1", 00:15:08.262 "method": "bdev_raid_add_base_bdev", 00:15:08.262 "req_id": 1 00:15:08.262 } 00:15:08.262 Got JSON-RPC error response 00:15:08.262 response: 00:15:08.262 { 00:15:08.262 "code": -22, 
00:15:08.262 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:08.262 } 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:08.262 16:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.637 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.638 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.638 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.638 16:16:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.638 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.638 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.638 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.638 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.638 "name": "raid_bdev1", 00:15:09.638 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:09.638 "strip_size_kb": 0, 00:15:09.638 "state": "online", 00:15:09.638 "raid_level": "raid1", 00:15:09.638 "superblock": true, 00:15:09.638 "num_base_bdevs": 4, 00:15:09.638 "num_base_bdevs_discovered": 2, 00:15:09.638 "num_base_bdevs_operational": 2, 00:15:09.638 "base_bdevs_list": [ 00:15:09.638 { 00:15:09.638 "name": null, 00:15:09.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.638 "is_configured": false, 00:15:09.638 "data_offset": 0, 00:15:09.638 "data_size": 63488 00:15:09.638 }, 00:15:09.638 { 00:15:09.638 "name": null, 00:15:09.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.638 "is_configured": false, 00:15:09.638 "data_offset": 2048, 00:15:09.638 "data_size": 63488 00:15:09.638 }, 00:15:09.638 { 00:15:09.638 "name": "BaseBdev3", 00:15:09.638 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:09.638 "is_configured": true, 00:15:09.638 "data_offset": 2048, 00:15:09.638 "data_size": 63488 00:15:09.638 }, 00:15:09.638 { 00:15:09.638 "name": "BaseBdev4", 00:15:09.638 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:09.638 "is_configured": true, 00:15:09.638 "data_offset": 2048, 00:15:09.638 "data_size": 63488 00:15:09.638 } 00:15:09.638 ] 00:15:09.638 }' 00:15:09.638 16:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.638 16:16:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.896 "name": "raid_bdev1", 00:15:09.896 "uuid": "4879cf84-3e06-481e-bf71-05e2a7e185c9", 00:15:09.896 "strip_size_kb": 0, 00:15:09.896 "state": "online", 00:15:09.896 "raid_level": "raid1", 00:15:09.896 "superblock": true, 00:15:09.896 "num_base_bdevs": 4, 00:15:09.896 "num_base_bdevs_discovered": 2, 00:15:09.896 "num_base_bdevs_operational": 2, 00:15:09.896 "base_bdevs_list": [ 00:15:09.896 { 00:15:09.896 "name": null, 00:15:09.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.896 "is_configured": false, 00:15:09.896 "data_offset": 0, 00:15:09.896 "data_size": 63488 00:15:09.896 }, 00:15:09.896 { 00:15:09.896 "name": null, 00:15:09.896 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:09.896 "is_configured": false, 00:15:09.896 "data_offset": 2048, 00:15:09.896 "data_size": 63488 00:15:09.896 }, 00:15:09.896 { 00:15:09.896 "name": "BaseBdev3", 00:15:09.896 "uuid": "c354b897-b331-5e89-a247-15711edc4c01", 00:15:09.896 "is_configured": true, 00:15:09.896 "data_offset": 2048, 00:15:09.896 "data_size": 63488 00:15:09.896 }, 00:15:09.896 { 00:15:09.896 "name": "BaseBdev4", 00:15:09.896 "uuid": "c86d5474-851c-5f38-b996-7f52a1379737", 00:15:09.896 "is_configured": true, 00:15:09.896 "data_offset": 2048, 00:15:09.896 "data_size": 63488 00:15:09.896 } 00:15:09.896 ] 00:15:09.896 }' 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79172 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79172 ']' 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79172 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.896 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79172 00:15:10.155 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.155 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:15:10.155 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79172' 00:15:10.155 killing process with pid 79172 00:15:10.155 Received shutdown signal, test time was about 17.957550 seconds 00:15:10.155 00:15:10.155 Latency(us) 00:15:10.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.155 =================================================================================================================== 00:15:10.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.155 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79172 00:15:10.155 [2024-09-28 16:16:24.600060] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.155 [2024-09-28 16:16:24.600164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.155 [2024-09-28 16:16:24.600237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.155 [2024-09-28 16:16:24.600251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:10.155 16:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79172 00:15:10.414 [2024-09-28 16:16:24.990220] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.798 16:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:11.798 00:15:11.798 real 0m21.454s 00:15:11.798 user 0m28.009s 00:15:11.798 sys 0m2.786s 00:15:11.798 16:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.798 16:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.798 ************************************ 00:15:11.798 END TEST raid_rebuild_test_sb_io 00:15:11.798 ************************************ 00:15:11.798 16:16:26 bdev_raid -- 
bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:11.798 16:16:26 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:11.798 16:16:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:11.798 16:16:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.798 16:16:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.798 ************************************ 00:15:11.798 START TEST raid5f_state_function_test 00:15:11.798 ************************************ 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79894 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:11.798 Process raid pid: 79894 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 79894' 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79894 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79894 ']' 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.798 16:16:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.798 [2024-09-28 16:16:26.413927] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:11.798 [2024-09-28 16:16:26.414057] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.058 [2024-09-28 16:16:26.584315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.318 [2024-09-28 16:16:26.781624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.318 [2024-09-28 16:16:26.967480] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.318 [2024-09-28 16:16:26.967514] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.578 [2024-09-28 16:16:27.222802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.578 [2024-09-28 16:16:27.222855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.578 [2024-09-28 16:16:27.222865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.578 [2024-09-28 16:16:27.222874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.578 [2024-09-28 16:16:27.222879] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:12.578 [2024-09-28 16:16:27.222888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.578 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:12.838 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.838 "name": "Existed_Raid", 00:15:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.838 "strip_size_kb": 64, 00:15:12.838 "state": "configuring", 00:15:12.838 "raid_level": "raid5f", 00:15:12.838 "superblock": false, 00:15:12.838 "num_base_bdevs": 3, 00:15:12.838 "num_base_bdevs_discovered": 0, 00:15:12.838 "num_base_bdevs_operational": 3, 00:15:12.838 "base_bdevs_list": [ 00:15:12.838 { 00:15:12.838 "name": "BaseBdev1", 00:15:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.838 "is_configured": false, 00:15:12.838 "data_offset": 0, 00:15:12.838 "data_size": 0 00:15:12.838 }, 00:15:12.838 { 00:15:12.838 "name": "BaseBdev2", 00:15:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.838 "is_configured": false, 00:15:12.838 "data_offset": 0, 00:15:12.838 "data_size": 0 00:15:12.838 }, 00:15:12.838 { 00:15:12.838 "name": "BaseBdev3", 00:15:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.838 "is_configured": false, 00:15:12.838 "data_offset": 0, 00:15:12.838 "data_size": 0 00:15:12.838 } 00:15:12.838 ] 00:15:12.838 }' 00:15:12.838 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.838 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.098 [2024-09-28 16:16:27.653996] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.098 [2024-09-28 16:16:27.654086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.098 [2024-09-28 16:16:27.665992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.098 [2024-09-28 16:16:27.666077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.098 [2024-09-28 16:16:27.666103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.098 [2024-09-28 16:16:27.666123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.098 [2024-09-28 16:16:27.666140] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.098 [2024-09-28 16:16:27.666159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.098 [2024-09-28 16:16:27.750827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.098 BaseBdev1 00:15:13.098 16:16:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.098 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.098 [ 00:15:13.098 { 00:15:13.098 "name": "BaseBdev1", 00:15:13.098 "aliases": [ 00:15:13.098 "20c82a6f-de63-45ed-9888-956a7071d034" 00:15:13.401 ], 00:15:13.401 "product_name": "Malloc disk", 00:15:13.401 "block_size": 512, 00:15:13.401 "num_blocks": 65536, 00:15:13.401 "uuid": "20c82a6f-de63-45ed-9888-956a7071d034", 00:15:13.401 "assigned_rate_limits": { 00:15:13.401 "rw_ios_per_sec": 0, 00:15:13.401 
"rw_mbytes_per_sec": 0, 00:15:13.401 "r_mbytes_per_sec": 0, 00:15:13.401 "w_mbytes_per_sec": 0 00:15:13.401 }, 00:15:13.401 "claimed": true, 00:15:13.401 "claim_type": "exclusive_write", 00:15:13.401 "zoned": false, 00:15:13.401 "supported_io_types": { 00:15:13.401 "read": true, 00:15:13.401 "write": true, 00:15:13.401 "unmap": true, 00:15:13.401 "flush": true, 00:15:13.401 "reset": true, 00:15:13.401 "nvme_admin": false, 00:15:13.401 "nvme_io": false, 00:15:13.401 "nvme_io_md": false, 00:15:13.401 "write_zeroes": true, 00:15:13.401 "zcopy": true, 00:15:13.401 "get_zone_info": false, 00:15:13.401 "zone_management": false, 00:15:13.401 "zone_append": false, 00:15:13.401 "compare": false, 00:15:13.401 "compare_and_write": false, 00:15:13.401 "abort": true, 00:15:13.401 "seek_hole": false, 00:15:13.401 "seek_data": false, 00:15:13.401 "copy": true, 00:15:13.401 "nvme_iov_md": false 00:15:13.401 }, 00:15:13.401 "memory_domains": [ 00:15:13.401 { 00:15:13.401 "dma_device_id": "system", 00:15:13.401 "dma_device_type": 1 00:15:13.401 }, 00:15:13.401 { 00:15:13.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.401 "dma_device_type": 2 00:15:13.401 } 00:15:13.401 ], 00:15:13.401 "driver_specific": {} 00:15:13.401 } 00:15:13.401 ] 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.401 16:16:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.401 "name": "Existed_Raid", 00:15:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.401 "strip_size_kb": 64, 00:15:13.401 "state": "configuring", 00:15:13.401 "raid_level": "raid5f", 00:15:13.401 "superblock": false, 00:15:13.401 "num_base_bdevs": 3, 00:15:13.401 "num_base_bdevs_discovered": 1, 00:15:13.401 "num_base_bdevs_operational": 3, 00:15:13.401 "base_bdevs_list": [ 00:15:13.401 { 00:15:13.401 "name": "BaseBdev1", 00:15:13.401 "uuid": "20c82a6f-de63-45ed-9888-956a7071d034", 00:15:13.401 "is_configured": true, 00:15:13.401 "data_offset": 0, 00:15:13.401 "data_size": 65536 00:15:13.401 }, 00:15:13.401 { 00:15:13.401 "name": 
"BaseBdev2", 00:15:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.401 "is_configured": false, 00:15:13.401 "data_offset": 0, 00:15:13.401 "data_size": 0 00:15:13.401 }, 00:15:13.401 { 00:15:13.401 "name": "BaseBdev3", 00:15:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.401 "is_configured": false, 00:15:13.401 "data_offset": 0, 00:15:13.401 "data_size": 0 00:15:13.401 } 00:15:13.401 ] 00:15:13.401 }' 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.401 16:16:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.679 [2024-09-28 16:16:28.234008] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.679 [2024-09-28 16:16:28.234096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.679 [2024-09-28 16:16:28.246027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.679 [2024-09-28 16:16:28.247845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:13.679 [2024-09-28 16:16:28.247924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.679 [2024-09-28 16:16:28.247951] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.679 [2024-09-28 16:16:28.247974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.679 "name": "Existed_Raid", 00:15:13.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.679 "strip_size_kb": 64, 00:15:13.679 "state": "configuring", 00:15:13.679 "raid_level": "raid5f", 00:15:13.679 "superblock": false, 00:15:13.679 "num_base_bdevs": 3, 00:15:13.679 "num_base_bdevs_discovered": 1, 00:15:13.679 "num_base_bdevs_operational": 3, 00:15:13.679 "base_bdevs_list": [ 00:15:13.679 { 00:15:13.679 "name": "BaseBdev1", 00:15:13.679 "uuid": "20c82a6f-de63-45ed-9888-956a7071d034", 00:15:13.679 "is_configured": true, 00:15:13.679 "data_offset": 0, 00:15:13.679 "data_size": 65536 00:15:13.679 }, 00:15:13.679 { 00:15:13.679 "name": "BaseBdev2", 00:15:13.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.679 "is_configured": false, 00:15:13.679 "data_offset": 0, 00:15:13.679 "data_size": 0 00:15:13.679 }, 00:15:13.679 { 00:15:13.679 "name": "BaseBdev3", 00:15:13.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.679 "is_configured": false, 00:15:13.679 "data_offset": 0, 00:15:13.679 "data_size": 0 00:15:13.679 } 00:15:13.679 ] 00:15:13.679 }' 00:15:13.679 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.680 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.293 16:16:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.293 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.293 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.294 [2024-09-28 16:16:28.756166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.294 BaseBdev2 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.294 [ 00:15:14.294 { 00:15:14.294 "name": "BaseBdev2", 00:15:14.294 "aliases": [ 00:15:14.294 "3521dddd-41bb-4c1f-abe1-4b5b9a146b11" 00:15:14.294 ], 00:15:14.294 "product_name": "Malloc disk", 00:15:14.294 "block_size": 512, 00:15:14.294 "num_blocks": 65536, 00:15:14.294 "uuid": "3521dddd-41bb-4c1f-abe1-4b5b9a146b11", 00:15:14.294 "assigned_rate_limits": { 00:15:14.294 "rw_ios_per_sec": 0, 00:15:14.294 "rw_mbytes_per_sec": 0, 00:15:14.294 "r_mbytes_per_sec": 0, 00:15:14.294 "w_mbytes_per_sec": 0 00:15:14.294 }, 00:15:14.294 "claimed": true, 00:15:14.294 "claim_type": "exclusive_write", 00:15:14.294 "zoned": false, 00:15:14.294 "supported_io_types": { 00:15:14.294 "read": true, 00:15:14.294 "write": true, 00:15:14.294 "unmap": true, 00:15:14.294 "flush": true, 00:15:14.294 "reset": true, 00:15:14.294 "nvme_admin": false, 00:15:14.294 "nvme_io": false, 00:15:14.294 "nvme_io_md": false, 00:15:14.294 "write_zeroes": true, 00:15:14.294 "zcopy": true, 00:15:14.294 "get_zone_info": false, 00:15:14.294 "zone_management": false, 00:15:14.294 "zone_append": false, 00:15:14.294 "compare": false, 00:15:14.294 "compare_and_write": false, 00:15:14.294 "abort": true, 00:15:14.294 "seek_hole": false, 00:15:14.294 "seek_data": false, 00:15:14.294 "copy": true, 00:15:14.294 "nvme_iov_md": false 00:15:14.294 }, 00:15:14.294 "memory_domains": [ 00:15:14.294 { 00:15:14.294 "dma_device_id": "system", 00:15:14.294 "dma_device_type": 1 00:15:14.294 }, 00:15:14.294 { 00:15:14.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.294 "dma_device_type": 2 00:15:14.294 } 00:15:14.294 ], 00:15:14.294 "driver_specific": {} 00:15:14.294 } 00:15:14.294 ] 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:14.294 "name": "Existed_Raid", 00:15:14.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.294 "strip_size_kb": 64, 00:15:14.294 "state": "configuring", 00:15:14.294 "raid_level": "raid5f", 00:15:14.294 "superblock": false, 00:15:14.294 "num_base_bdevs": 3, 00:15:14.294 "num_base_bdevs_discovered": 2, 00:15:14.294 "num_base_bdevs_operational": 3, 00:15:14.294 "base_bdevs_list": [ 00:15:14.294 { 00:15:14.294 "name": "BaseBdev1", 00:15:14.294 "uuid": "20c82a6f-de63-45ed-9888-956a7071d034", 00:15:14.294 "is_configured": true, 00:15:14.294 "data_offset": 0, 00:15:14.294 "data_size": 65536 00:15:14.294 }, 00:15:14.294 { 00:15:14.294 "name": "BaseBdev2", 00:15:14.294 "uuid": "3521dddd-41bb-4c1f-abe1-4b5b9a146b11", 00:15:14.294 "is_configured": true, 00:15:14.294 "data_offset": 0, 00:15:14.294 "data_size": 65536 00:15:14.294 }, 00:15:14.294 { 00:15:14.294 "name": "BaseBdev3", 00:15:14.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.294 "is_configured": false, 00:15:14.294 "data_offset": 0, 00:15:14.294 "data_size": 0 00:15:14.294 } 00:15:14.294 ] 00:15:14.294 }' 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.294 16:16:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.554 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:14.554 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.554 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.815 [2024-09-28 16:16:29.258522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.815 [2024-09-28 16:16:29.258649] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:14.815 [2024-09-28 16:16:29.258672] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:14.815 [2024-09-28 16:16:29.258909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:14.815 [2024-09-28 16:16:29.264483] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:14.815 [2024-09-28 16:16:29.264504] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:14.815 [2024-09-28 16:16:29.264745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.815 BaseBdev3 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.815 [ 00:15:14.815 { 00:15:14.815 "name": "BaseBdev3", 00:15:14.815 "aliases": [ 00:15:14.815 "3c3c7470-198b-4242-b18d-5ea9c231a62f" 00:15:14.815 ], 00:15:14.815 "product_name": "Malloc disk", 00:15:14.815 "block_size": 512, 00:15:14.815 "num_blocks": 65536, 00:15:14.815 "uuid": "3c3c7470-198b-4242-b18d-5ea9c231a62f", 00:15:14.815 "assigned_rate_limits": { 00:15:14.815 "rw_ios_per_sec": 0, 00:15:14.815 "rw_mbytes_per_sec": 0, 00:15:14.815 "r_mbytes_per_sec": 0, 00:15:14.815 "w_mbytes_per_sec": 0 00:15:14.815 }, 00:15:14.815 "claimed": true, 00:15:14.815 "claim_type": "exclusive_write", 00:15:14.815 "zoned": false, 00:15:14.815 "supported_io_types": { 00:15:14.815 "read": true, 00:15:14.815 "write": true, 00:15:14.815 "unmap": true, 00:15:14.815 "flush": true, 00:15:14.815 "reset": true, 00:15:14.815 "nvme_admin": false, 00:15:14.815 "nvme_io": false, 00:15:14.815 "nvme_io_md": false, 00:15:14.815 "write_zeroes": true, 00:15:14.815 "zcopy": true, 00:15:14.815 "get_zone_info": false, 00:15:14.815 "zone_management": false, 00:15:14.815 "zone_append": false, 00:15:14.815 "compare": false, 00:15:14.815 "compare_and_write": false, 00:15:14.815 "abort": true, 00:15:14.815 "seek_hole": false, 00:15:14.815 "seek_data": false, 00:15:14.815 "copy": true, 00:15:14.815 "nvme_iov_md": false 00:15:14.815 }, 00:15:14.815 "memory_domains": [ 00:15:14.815 { 00:15:14.815 "dma_device_id": "system", 00:15:14.815 "dma_device_type": 1 00:15:14.815 }, 00:15:14.815 { 00:15:14.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.815 "dma_device_type": 2 00:15:14.815 } 00:15:14.815 ], 00:15:14.815 "driver_specific": {} 00:15:14.815 } 00:15:14.815 ] 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.815 16:16:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.815 "name": "Existed_Raid", 00:15:14.815 "uuid": "eb3c5c85-dc00-444d-bb78-645e0bddcdfb", 00:15:14.815 "strip_size_kb": 64, 00:15:14.815 "state": "online", 00:15:14.815 "raid_level": "raid5f", 00:15:14.815 "superblock": false, 00:15:14.815 "num_base_bdevs": 3, 00:15:14.815 "num_base_bdevs_discovered": 3, 00:15:14.815 "num_base_bdevs_operational": 3, 00:15:14.815 "base_bdevs_list": [ 00:15:14.815 { 00:15:14.815 "name": "BaseBdev1", 00:15:14.815 "uuid": "20c82a6f-de63-45ed-9888-956a7071d034", 00:15:14.815 "is_configured": true, 00:15:14.815 "data_offset": 0, 00:15:14.815 "data_size": 65536 00:15:14.815 }, 00:15:14.815 { 00:15:14.815 "name": "BaseBdev2", 00:15:14.815 "uuid": "3521dddd-41bb-4c1f-abe1-4b5b9a146b11", 00:15:14.815 "is_configured": true, 00:15:14.815 "data_offset": 0, 00:15:14.815 "data_size": 65536 00:15:14.815 }, 00:15:14.815 { 00:15:14.815 "name": "BaseBdev3", 00:15:14.815 "uuid": "3c3c7470-198b-4242-b18d-5ea9c231a62f", 00:15:14.815 "is_configured": true, 00:15:14.815 "data_offset": 0, 00:15:14.815 "data_size": 65536 00:15:14.815 } 00:15:14.815 ] 00:15:14.815 }' 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.815 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:15.386 16:16:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.386 [2024-09-28 16:16:29.789903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:15.386 "name": "Existed_Raid", 00:15:15.386 "aliases": [ 00:15:15.386 "eb3c5c85-dc00-444d-bb78-645e0bddcdfb" 00:15:15.386 ], 00:15:15.386 "product_name": "Raid Volume", 00:15:15.386 "block_size": 512, 00:15:15.386 "num_blocks": 131072, 00:15:15.386 "uuid": "eb3c5c85-dc00-444d-bb78-645e0bddcdfb", 00:15:15.386 "assigned_rate_limits": { 00:15:15.386 "rw_ios_per_sec": 0, 00:15:15.386 "rw_mbytes_per_sec": 0, 00:15:15.386 "r_mbytes_per_sec": 0, 00:15:15.386 "w_mbytes_per_sec": 0 00:15:15.386 }, 00:15:15.386 "claimed": false, 00:15:15.386 "zoned": false, 00:15:15.386 "supported_io_types": { 00:15:15.386 "read": true, 00:15:15.386 "write": true, 00:15:15.386 "unmap": false, 00:15:15.386 "flush": false, 00:15:15.386 "reset": true, 00:15:15.386 "nvme_admin": false, 00:15:15.386 "nvme_io": false, 00:15:15.386 "nvme_io_md": false, 00:15:15.386 "write_zeroes": true, 00:15:15.386 "zcopy": false, 00:15:15.386 "get_zone_info": false, 00:15:15.386 "zone_management": false, 00:15:15.386 "zone_append": false, 
00:15:15.386 "compare": false, 00:15:15.386 "compare_and_write": false, 00:15:15.386 "abort": false, 00:15:15.386 "seek_hole": false, 00:15:15.386 "seek_data": false, 00:15:15.386 "copy": false, 00:15:15.386 "nvme_iov_md": false 00:15:15.386 }, 00:15:15.386 "driver_specific": { 00:15:15.386 "raid": { 00:15:15.386 "uuid": "eb3c5c85-dc00-444d-bb78-645e0bddcdfb", 00:15:15.386 "strip_size_kb": 64, 00:15:15.386 "state": "online", 00:15:15.386 "raid_level": "raid5f", 00:15:15.386 "superblock": false, 00:15:15.386 "num_base_bdevs": 3, 00:15:15.386 "num_base_bdevs_discovered": 3, 00:15:15.386 "num_base_bdevs_operational": 3, 00:15:15.386 "base_bdevs_list": [ 00:15:15.386 { 00:15:15.386 "name": "BaseBdev1", 00:15:15.386 "uuid": "20c82a6f-de63-45ed-9888-956a7071d034", 00:15:15.386 "is_configured": true, 00:15:15.386 "data_offset": 0, 00:15:15.386 "data_size": 65536 00:15:15.386 }, 00:15:15.386 { 00:15:15.386 "name": "BaseBdev2", 00:15:15.386 "uuid": "3521dddd-41bb-4c1f-abe1-4b5b9a146b11", 00:15:15.386 "is_configured": true, 00:15:15.386 "data_offset": 0, 00:15:15.386 "data_size": 65536 00:15:15.386 }, 00:15:15.386 { 00:15:15.386 "name": "BaseBdev3", 00:15:15.386 "uuid": "3c3c7470-198b-4242-b18d-5ea9c231a62f", 00:15:15.386 "is_configured": true, 00:15:15.386 "data_offset": 0, 00:15:15.386 "data_size": 65536 00:15:15.386 } 00:15:15.386 ] 00:15:15.386 } 00:15:15.386 } 00:15:15.386 }' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:15.386 BaseBdev2 00:15:15.386 BaseBdev3' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.386 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.387 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.387 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.387 16:16:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:15.387 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.387 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.387 16:16:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.387 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.648 [2024-09-28 16:16:30.077259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:15.648 
16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.648 "name": "Existed_Raid", 00:15:15.648 "uuid": "eb3c5c85-dc00-444d-bb78-645e0bddcdfb", 00:15:15.648 "strip_size_kb": 64, 00:15:15.648 "state": 
"online", 00:15:15.648 "raid_level": "raid5f", 00:15:15.648 "superblock": false, 00:15:15.648 "num_base_bdevs": 3, 00:15:15.648 "num_base_bdevs_discovered": 2, 00:15:15.648 "num_base_bdevs_operational": 2, 00:15:15.648 "base_bdevs_list": [ 00:15:15.648 { 00:15:15.648 "name": null, 00:15:15.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.648 "is_configured": false, 00:15:15.648 "data_offset": 0, 00:15:15.648 "data_size": 65536 00:15:15.648 }, 00:15:15.648 { 00:15:15.648 "name": "BaseBdev2", 00:15:15.648 "uuid": "3521dddd-41bb-4c1f-abe1-4b5b9a146b11", 00:15:15.648 "is_configured": true, 00:15:15.648 "data_offset": 0, 00:15:15.648 "data_size": 65536 00:15:15.648 }, 00:15:15.648 { 00:15:15.648 "name": "BaseBdev3", 00:15:15.648 "uuid": "3c3c7470-198b-4242-b18d-5ea9c231a62f", 00:15:15.648 "is_configured": true, 00:15:15.648 "data_offset": 0, 00:15:15.648 "data_size": 65536 00:15:15.648 } 00:15:15.648 ] 00:15:15.648 }' 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.648 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.218 [2024-09-28 16:16:30.673062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.218 [2024-09-28 16:16:30.673204] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.218 [2024-09-28 16:16:30.761115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.218 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.218 [2024-09-28 16:16:30.821030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:16.218 [2024-09-28 16:16:30.821125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.479 16:16:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.479 BaseBdev2 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:16.479 [ 00:15:16.479 { 00:15:16.479 "name": "BaseBdev2", 00:15:16.479 "aliases": [ 00:15:16.479 "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61" 00:15:16.479 ], 00:15:16.479 "product_name": "Malloc disk", 00:15:16.479 "block_size": 512, 00:15:16.479 "num_blocks": 65536, 00:15:16.479 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:16.479 "assigned_rate_limits": { 00:15:16.479 "rw_ios_per_sec": 0, 00:15:16.479 "rw_mbytes_per_sec": 0, 00:15:16.479 "r_mbytes_per_sec": 0, 00:15:16.479 "w_mbytes_per_sec": 0 00:15:16.479 }, 00:15:16.479 "claimed": false, 00:15:16.479 "zoned": false, 00:15:16.479 "supported_io_types": { 00:15:16.479 "read": true, 00:15:16.479 "write": true, 00:15:16.479 "unmap": true, 00:15:16.479 "flush": true, 00:15:16.479 "reset": true, 00:15:16.479 "nvme_admin": false, 00:15:16.479 "nvme_io": false, 00:15:16.479 "nvme_io_md": false, 00:15:16.479 "write_zeroes": true, 00:15:16.479 "zcopy": true, 00:15:16.479 "get_zone_info": false, 00:15:16.479 "zone_management": false, 00:15:16.479 "zone_append": false, 00:15:16.479 "compare": false, 00:15:16.479 "compare_and_write": false, 00:15:16.479 "abort": true, 00:15:16.479 "seek_hole": false, 00:15:16.479 "seek_data": false, 00:15:16.479 "copy": true, 00:15:16.479 "nvme_iov_md": false 00:15:16.479 }, 00:15:16.479 "memory_domains": [ 00:15:16.479 { 00:15:16.479 "dma_device_id": "system", 00:15:16.479 "dma_device_type": 1 00:15:16.479 }, 00:15:16.479 { 00:15:16.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.479 "dma_device_type": 2 00:15:16.479 } 00:15:16.479 ], 00:15:16.479 "driver_specific": {} 00:15:16.479 } 00:15:16.479 ] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.479 BaseBdev3 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.479 16:16:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.479 [ 00:15:16.479 { 00:15:16.479 "name": "BaseBdev3", 00:15:16.479 "aliases": [ 00:15:16.479 "75b7992c-085e-4fb5-b566-2febacf52d31" 00:15:16.479 ], 00:15:16.479 "product_name": "Malloc disk", 00:15:16.479 "block_size": 512, 00:15:16.479 "num_blocks": 65536, 00:15:16.479 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:16.479 "assigned_rate_limits": { 00:15:16.479 "rw_ios_per_sec": 0, 00:15:16.479 "rw_mbytes_per_sec": 0, 00:15:16.479 "r_mbytes_per_sec": 0, 00:15:16.479 "w_mbytes_per_sec": 0 00:15:16.479 }, 00:15:16.479 "claimed": false, 00:15:16.479 "zoned": false, 00:15:16.479 "supported_io_types": { 00:15:16.479 "read": true, 00:15:16.479 "write": true, 00:15:16.479 "unmap": true, 00:15:16.479 "flush": true, 00:15:16.479 "reset": true, 00:15:16.479 "nvme_admin": false, 00:15:16.479 "nvme_io": false, 00:15:16.479 "nvme_io_md": false, 00:15:16.479 "write_zeroes": true, 00:15:16.479 "zcopy": true, 00:15:16.479 "get_zone_info": false, 00:15:16.479 "zone_management": false, 00:15:16.479 "zone_append": false, 00:15:16.479 "compare": false, 00:15:16.479 "compare_and_write": false, 00:15:16.479 "abort": true, 00:15:16.479 "seek_hole": false, 00:15:16.479 "seek_data": false, 00:15:16.479 "copy": true, 00:15:16.479 "nvme_iov_md": false 00:15:16.479 }, 00:15:16.479 "memory_domains": [ 00:15:16.479 { 00:15:16.479 "dma_device_id": "system", 00:15:16.480 "dma_device_type": 1 00:15:16.480 }, 00:15:16.480 { 00:15:16.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.480 "dma_device_type": 2 00:15:16.480 } 00:15:16.480 ], 00:15:16.480 "driver_specific": {} 00:15:16.480 } 00:15:16.480 ] 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:16.480 16:16:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.480 [2024-09-28 16:16:31.128813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:16.480 [2024-09-28 16:16:31.128866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:16.480 [2024-09-28 16:16:31.128884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.480 [2024-09-28 16:16:31.130543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.480 16:16:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.480 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.739 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.739 "name": "Existed_Raid", 00:15:16.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.739 "strip_size_kb": 64, 00:15:16.739 "state": "configuring", 00:15:16.739 "raid_level": "raid5f", 00:15:16.739 "superblock": false, 00:15:16.739 "num_base_bdevs": 3, 00:15:16.739 "num_base_bdevs_discovered": 2, 00:15:16.739 "num_base_bdevs_operational": 3, 00:15:16.739 "base_bdevs_list": [ 00:15:16.739 { 00:15:16.739 "name": "BaseBdev1", 00:15:16.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.739 "is_configured": false, 00:15:16.739 "data_offset": 0, 00:15:16.739 "data_size": 0 00:15:16.739 }, 00:15:16.739 { 00:15:16.739 "name": "BaseBdev2", 00:15:16.739 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:16.739 "is_configured": true, 00:15:16.739 "data_offset": 0, 00:15:16.739 "data_size": 65536 00:15:16.739 }, 00:15:16.739 { 00:15:16.739 "name": "BaseBdev3", 00:15:16.739 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:16.739 "is_configured": true, 
00:15:16.739 "data_offset": 0, 00:15:16.739 "data_size": 65536 00:15:16.739 } 00:15:16.739 ] 00:15:16.739 }' 00:15:16.739 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.739 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.999 [2024-09-28 16:16:31.576031] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.999 16:16:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.999 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.999 "name": "Existed_Raid", 00:15:16.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.999 "strip_size_kb": 64, 00:15:16.999 "state": "configuring", 00:15:16.999 "raid_level": "raid5f", 00:15:16.999 "superblock": false, 00:15:16.999 "num_base_bdevs": 3, 00:15:16.999 "num_base_bdevs_discovered": 1, 00:15:16.999 "num_base_bdevs_operational": 3, 00:15:16.999 "base_bdevs_list": [ 00:15:16.999 { 00:15:16.999 "name": "BaseBdev1", 00:15:16.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.999 "is_configured": false, 00:15:16.999 "data_offset": 0, 00:15:16.999 "data_size": 0 00:15:16.999 }, 00:15:16.999 { 00:15:16.999 "name": null, 00:15:16.999 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:16.999 "is_configured": false, 00:15:16.999 "data_offset": 0, 00:15:17.000 "data_size": 65536 00:15:17.000 }, 00:15:17.000 { 00:15:17.000 "name": "BaseBdev3", 00:15:17.000 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:17.000 "is_configured": true, 00:15:17.000 "data_offset": 0, 00:15:17.000 "data_size": 65536 00:15:17.000 } 00:15:17.000 ] 00:15:17.000 }' 00:15:17.000 16:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.000 16:16:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.570 [2024-09-28 16:16:32.074798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.570 BaseBdev1 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:17.570 16:16:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.570 [ 00:15:17.570 { 00:15:17.570 "name": "BaseBdev1", 00:15:17.570 "aliases": [ 00:15:17.570 "21031562-47b0-45a9-aeaf-91cc500999f7" 00:15:17.570 ], 00:15:17.570 "product_name": "Malloc disk", 00:15:17.570 "block_size": 512, 00:15:17.570 "num_blocks": 65536, 00:15:17.570 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:17.570 "assigned_rate_limits": { 00:15:17.570 "rw_ios_per_sec": 0, 00:15:17.570 "rw_mbytes_per_sec": 0, 00:15:17.570 "r_mbytes_per_sec": 0, 00:15:17.570 "w_mbytes_per_sec": 0 00:15:17.570 }, 00:15:17.570 "claimed": true, 00:15:17.570 "claim_type": "exclusive_write", 00:15:17.570 "zoned": false, 00:15:17.570 "supported_io_types": { 00:15:17.570 "read": true, 00:15:17.570 "write": true, 00:15:17.570 "unmap": true, 00:15:17.570 "flush": true, 00:15:17.570 "reset": true, 00:15:17.570 "nvme_admin": false, 00:15:17.570 "nvme_io": false, 00:15:17.570 "nvme_io_md": false, 00:15:17.570 "write_zeroes": true, 00:15:17.570 "zcopy": true, 00:15:17.570 "get_zone_info": false, 00:15:17.570 "zone_management": false, 00:15:17.570 "zone_append": false, 00:15:17.570 
"compare": false, 00:15:17.570 "compare_and_write": false, 00:15:17.570 "abort": true, 00:15:17.570 "seek_hole": false, 00:15:17.570 "seek_data": false, 00:15:17.570 "copy": true, 00:15:17.570 "nvme_iov_md": false 00:15:17.570 }, 00:15:17.570 "memory_domains": [ 00:15:17.570 { 00:15:17.570 "dma_device_id": "system", 00:15:17.570 "dma_device_type": 1 00:15:17.570 }, 00:15:17.570 { 00:15:17.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.570 "dma_device_type": 2 00:15:17.570 } 00:15:17.570 ], 00:15:17.570 "driver_specific": {} 00:15:17.570 } 00:15:17.570 ] 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.570 16:16:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.570 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.570 "name": "Existed_Raid", 00:15:17.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.570 "strip_size_kb": 64, 00:15:17.570 "state": "configuring", 00:15:17.570 "raid_level": "raid5f", 00:15:17.570 "superblock": false, 00:15:17.570 "num_base_bdevs": 3, 00:15:17.570 "num_base_bdevs_discovered": 2, 00:15:17.570 "num_base_bdevs_operational": 3, 00:15:17.570 "base_bdevs_list": [ 00:15:17.570 { 00:15:17.570 "name": "BaseBdev1", 00:15:17.570 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:17.570 "is_configured": true, 00:15:17.570 "data_offset": 0, 00:15:17.570 "data_size": 65536 00:15:17.570 }, 00:15:17.570 { 00:15:17.570 "name": null, 00:15:17.570 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:17.570 "is_configured": false, 00:15:17.570 "data_offset": 0, 00:15:17.570 "data_size": 65536 00:15:17.570 }, 00:15:17.570 { 00:15:17.570 "name": "BaseBdev3", 00:15:17.570 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:17.570 "is_configured": true, 00:15:17.570 "data_offset": 0, 00:15:17.570 "data_size": 65536 00:15:17.570 } 00:15:17.570 ] 00:15:17.571 }' 00:15:17.571 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.571 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.140 16:16:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:18.140 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.140 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.140 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.140 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.141 [2024-09-28 16:16:32.609916] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.141 16:16:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.141 "name": "Existed_Raid", 00:15:18.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.141 "strip_size_kb": 64, 00:15:18.141 "state": "configuring", 00:15:18.141 "raid_level": "raid5f", 00:15:18.141 "superblock": false, 00:15:18.141 "num_base_bdevs": 3, 00:15:18.141 "num_base_bdevs_discovered": 1, 00:15:18.141 "num_base_bdevs_operational": 3, 00:15:18.141 "base_bdevs_list": [ 00:15:18.141 { 00:15:18.141 "name": "BaseBdev1", 00:15:18.141 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:18.141 "is_configured": true, 00:15:18.141 "data_offset": 0, 00:15:18.141 "data_size": 65536 00:15:18.141 }, 00:15:18.141 { 00:15:18.141 "name": null, 00:15:18.141 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:18.141 "is_configured": false, 00:15:18.141 "data_offset": 0, 00:15:18.141 "data_size": 65536 00:15:18.141 }, 00:15:18.141 { 00:15:18.141 "name": null, 
00:15:18.141 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:18.141 "is_configured": false, 00:15:18.141 "data_offset": 0, 00:15:18.141 "data_size": 65536 00:15:18.141 } 00:15:18.141 ] 00:15:18.141 }' 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.141 16:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.401 [2024-09-28 16:16:33.057201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.401 16:16:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.401 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.661 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.661 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.661 "name": "Existed_Raid", 00:15:18.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.661 "strip_size_kb": 64, 00:15:18.661 "state": "configuring", 00:15:18.661 "raid_level": "raid5f", 00:15:18.661 "superblock": false, 00:15:18.661 "num_base_bdevs": 3, 00:15:18.661 "num_base_bdevs_discovered": 2, 00:15:18.661 "num_base_bdevs_operational": 3, 00:15:18.661 "base_bdevs_list": [ 00:15:18.661 { 
00:15:18.661 "name": "BaseBdev1", 00:15:18.661 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:18.661 "is_configured": true, 00:15:18.661 "data_offset": 0, 00:15:18.661 "data_size": 65536 00:15:18.661 }, 00:15:18.661 { 00:15:18.661 "name": null, 00:15:18.661 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:18.661 "is_configured": false, 00:15:18.661 "data_offset": 0, 00:15:18.661 "data_size": 65536 00:15:18.661 }, 00:15:18.661 { 00:15:18.661 "name": "BaseBdev3", 00:15:18.661 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:18.661 "is_configured": true, 00:15:18.661 "data_offset": 0, 00:15:18.661 "data_size": 65536 00:15:18.661 } 00:15:18.661 ] 00:15:18.661 }' 00:15:18.661 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.661 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.922 [2024-09-28 16:16:33.496469] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.922 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.182 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.182 16:16:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.182 "name": "Existed_Raid", 00:15:19.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.182 "strip_size_kb": 64, 00:15:19.182 "state": "configuring", 00:15:19.182 "raid_level": "raid5f", 00:15:19.182 "superblock": false, 00:15:19.182 "num_base_bdevs": 3, 00:15:19.182 "num_base_bdevs_discovered": 1, 00:15:19.182 "num_base_bdevs_operational": 3, 00:15:19.182 "base_bdevs_list": [ 00:15:19.182 { 00:15:19.182 "name": null, 00:15:19.182 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:19.182 "is_configured": false, 00:15:19.182 "data_offset": 0, 00:15:19.182 "data_size": 65536 00:15:19.182 }, 00:15:19.182 { 00:15:19.182 "name": null, 00:15:19.182 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:19.182 "is_configured": false, 00:15:19.182 "data_offset": 0, 00:15:19.182 "data_size": 65536 00:15:19.182 }, 00:15:19.182 { 00:15:19.182 "name": "BaseBdev3", 00:15:19.182 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:19.182 "is_configured": true, 00:15:19.182 "data_offset": 0, 00:15:19.182 "data_size": 65536 00:15:19.182 } 00:15:19.182 ] 00:15:19.182 }' 00:15:19.182 16:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.182 16:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.443 [2024-09-28 16:16:34.088520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.443 16:16:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.443 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.703 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.703 "name": "Existed_Raid", 00:15:19.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.703 "strip_size_kb": 64, 00:15:19.703 "state": "configuring", 00:15:19.703 "raid_level": "raid5f", 00:15:19.703 "superblock": false, 00:15:19.703 "num_base_bdevs": 3, 00:15:19.703 "num_base_bdevs_discovered": 2, 00:15:19.703 "num_base_bdevs_operational": 3, 00:15:19.703 "base_bdevs_list": [ 00:15:19.703 { 00:15:19.703 "name": null, 00:15:19.703 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:19.703 "is_configured": false, 00:15:19.703 "data_offset": 0, 00:15:19.703 "data_size": 65536 00:15:19.703 }, 00:15:19.703 { 00:15:19.703 "name": "BaseBdev2", 00:15:19.703 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:19.703 "is_configured": true, 00:15:19.703 "data_offset": 0, 00:15:19.703 "data_size": 65536 00:15:19.703 }, 00:15:19.703 { 00:15:19.703 "name": "BaseBdev3", 00:15:19.703 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:19.703 "is_configured": true, 00:15:19.703 "data_offset": 0, 00:15:19.703 "data_size": 65536 00:15:19.703 } 00:15:19.703 ] 00:15:19.703 }' 00:15:19.703 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.703 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.962 16:16:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 21031562-47b0-45a9-aeaf-91cc500999f7 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.962 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.223 [2024-09-28 16:16:34.679598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:20.223 [2024-09-28 16:16:34.679695] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:20.223 [2024-09-28 16:16:34.679722] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:20.223 [2024-09-28 16:16:34.679981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:20.223 [2024-09-28 16:16:34.685117] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:20.223 [2024-09-28 16:16:34.685176] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:20.223 [2024-09-28 16:16:34.685451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.223 NewBaseBdev 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.223 16:16:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.223 [ 00:15:20.223 { 00:15:20.223 "name": "NewBaseBdev", 00:15:20.223 "aliases": [ 00:15:20.223 "21031562-47b0-45a9-aeaf-91cc500999f7" 00:15:20.223 ], 00:15:20.223 "product_name": "Malloc disk", 00:15:20.223 "block_size": 512, 00:15:20.223 "num_blocks": 65536, 00:15:20.223 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:20.223 "assigned_rate_limits": { 00:15:20.223 "rw_ios_per_sec": 0, 00:15:20.223 "rw_mbytes_per_sec": 0, 00:15:20.223 "r_mbytes_per_sec": 0, 00:15:20.223 "w_mbytes_per_sec": 0 00:15:20.223 }, 00:15:20.223 "claimed": true, 00:15:20.223 "claim_type": "exclusive_write", 00:15:20.223 "zoned": false, 00:15:20.223 "supported_io_types": { 00:15:20.223 "read": true, 00:15:20.223 "write": true, 00:15:20.223 "unmap": true, 00:15:20.223 "flush": true, 00:15:20.223 "reset": true, 00:15:20.223 "nvme_admin": false, 00:15:20.223 "nvme_io": false, 00:15:20.223 "nvme_io_md": false, 00:15:20.223 "write_zeroes": true, 00:15:20.223 "zcopy": true, 00:15:20.223 "get_zone_info": false, 00:15:20.223 "zone_management": false, 00:15:20.223 "zone_append": false, 00:15:20.223 "compare": false, 00:15:20.223 "compare_and_write": false, 00:15:20.223 "abort": true, 00:15:20.223 "seek_hole": false, 00:15:20.223 "seek_data": false, 00:15:20.223 "copy": true, 00:15:20.223 "nvme_iov_md": false 00:15:20.223 }, 00:15:20.223 "memory_domains": [ 00:15:20.223 { 00:15:20.223 "dma_device_id": "system", 00:15:20.223 "dma_device_type": 1 00:15:20.223 }, 00:15:20.223 { 00:15:20.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.223 "dma_device_type": 2 00:15:20.223 } 00:15:20.223 ], 00:15:20.223 "driver_specific": {} 00:15:20.223 } 00:15:20.223 ] 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:20.223 16:16:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.223 "name": "Existed_Raid", 00:15:20.223 "uuid": "59fa0d14-01e2-4b4c-ba7f-6cc4f1cd119f", 00:15:20.223 "strip_size_kb": 64, 00:15:20.223 "state": "online", 
00:15:20.223 "raid_level": "raid5f", 00:15:20.223 "superblock": false, 00:15:20.223 "num_base_bdevs": 3, 00:15:20.223 "num_base_bdevs_discovered": 3, 00:15:20.223 "num_base_bdevs_operational": 3, 00:15:20.223 "base_bdevs_list": [ 00:15:20.223 { 00:15:20.223 "name": "NewBaseBdev", 00:15:20.223 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:20.223 "is_configured": true, 00:15:20.223 "data_offset": 0, 00:15:20.223 "data_size": 65536 00:15:20.223 }, 00:15:20.223 { 00:15:20.223 "name": "BaseBdev2", 00:15:20.223 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:20.223 "is_configured": true, 00:15:20.223 "data_offset": 0, 00:15:20.223 "data_size": 65536 00:15:20.223 }, 00:15:20.223 { 00:15:20.223 "name": "BaseBdev3", 00:15:20.223 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:20.223 "is_configured": true, 00:15:20.223 "data_offset": 0, 00:15:20.223 "data_size": 65536 00:15:20.223 } 00:15:20.223 ] 00:15:20.223 }' 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.223 16:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.483 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:20.483 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:20.483 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:20.483 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:20.483 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:20.483 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:20.744 16:16:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.744 [2024-09-28 16:16:35.178780] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:20.744 "name": "Existed_Raid", 00:15:20.744 "aliases": [ 00:15:20.744 "59fa0d14-01e2-4b4c-ba7f-6cc4f1cd119f" 00:15:20.744 ], 00:15:20.744 "product_name": "Raid Volume", 00:15:20.744 "block_size": 512, 00:15:20.744 "num_blocks": 131072, 00:15:20.744 "uuid": "59fa0d14-01e2-4b4c-ba7f-6cc4f1cd119f", 00:15:20.744 "assigned_rate_limits": { 00:15:20.744 "rw_ios_per_sec": 0, 00:15:20.744 "rw_mbytes_per_sec": 0, 00:15:20.744 "r_mbytes_per_sec": 0, 00:15:20.744 "w_mbytes_per_sec": 0 00:15:20.744 }, 00:15:20.744 "claimed": false, 00:15:20.744 "zoned": false, 00:15:20.744 "supported_io_types": { 00:15:20.744 "read": true, 00:15:20.744 "write": true, 00:15:20.744 "unmap": false, 00:15:20.744 "flush": false, 00:15:20.744 "reset": true, 00:15:20.744 "nvme_admin": false, 00:15:20.744 "nvme_io": false, 00:15:20.744 "nvme_io_md": false, 00:15:20.744 "write_zeroes": true, 00:15:20.744 "zcopy": false, 00:15:20.744 "get_zone_info": false, 00:15:20.744 "zone_management": false, 00:15:20.744 "zone_append": false, 00:15:20.744 "compare": false, 00:15:20.744 "compare_and_write": false, 00:15:20.744 "abort": false, 00:15:20.744 "seek_hole": false, 00:15:20.744 "seek_data": false, 00:15:20.744 "copy": false, 00:15:20.744 "nvme_iov_md": false 00:15:20.744 }, 00:15:20.744 "driver_specific": { 00:15:20.744 "raid": { 00:15:20.744 "uuid": 
"59fa0d14-01e2-4b4c-ba7f-6cc4f1cd119f", 00:15:20.744 "strip_size_kb": 64, 00:15:20.744 "state": "online", 00:15:20.744 "raid_level": "raid5f", 00:15:20.744 "superblock": false, 00:15:20.744 "num_base_bdevs": 3, 00:15:20.744 "num_base_bdevs_discovered": 3, 00:15:20.744 "num_base_bdevs_operational": 3, 00:15:20.744 "base_bdevs_list": [ 00:15:20.744 { 00:15:20.744 "name": "NewBaseBdev", 00:15:20.744 "uuid": "21031562-47b0-45a9-aeaf-91cc500999f7", 00:15:20.744 "is_configured": true, 00:15:20.744 "data_offset": 0, 00:15:20.744 "data_size": 65536 00:15:20.744 }, 00:15:20.744 { 00:15:20.744 "name": "BaseBdev2", 00:15:20.744 "uuid": "fcbf3d3e-e18a-4835-bb0e-26bbb199aa61", 00:15:20.744 "is_configured": true, 00:15:20.744 "data_offset": 0, 00:15:20.744 "data_size": 65536 00:15:20.744 }, 00:15:20.744 { 00:15:20.744 "name": "BaseBdev3", 00:15:20.744 "uuid": "75b7992c-085e-4fb5-b566-2febacf52d31", 00:15:20.744 "is_configured": true, 00:15:20.744 "data_offset": 0, 00:15:20.744 "data_size": 65536 00:15:20.744 } 00:15:20.744 ] 00:15:20.744 } 00:15:20.744 } 00:15:20.744 }' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:20.744 BaseBdev2 00:15:20.744 BaseBdev3' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.744 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.005 [2024-09-28 16:16:35.478110] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.005 [2024-09-28 16:16:35.478181] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.005 [2024-09-28 16:16:35.478270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.005 [2024-09-28 16:16:35.478550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.005 [2024-09-28 16:16:35.478606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79894 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79894 ']' 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 79894 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79894 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.005 killing process with pid 79894 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79894' 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79894 00:15:21.005 [2024-09-28 16:16:35.525065] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.005 16:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79894 00:15:21.265 [2024-09-28 16:16:35.806276] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.646 16:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:22.646 00:15:22.646 real 0m10.692s 00:15:22.646 user 0m16.856s 00:15:22.646 sys 0m2.086s 00:15:22.646 16:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.647 16:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 ************************************ 00:15:22.647 END TEST raid5f_state_function_test 00:15:22.647 ************************************ 00:15:22.647 16:16:37 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:22.647 16:16:37 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:22.647 16:16:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.647 16:16:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 ************************************ 00:15:22.647 START TEST raid5f_state_function_test_sb 00:15:22.647 ************************************ 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.647 16:16:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80517 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80517' 00:15:22.647 Process raid pid: 80517 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80517 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80517 ']' 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.647 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 [2024-09-28 16:16:37.187962] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:22.647 [2024-09-28 16:16:37.188235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.908 [2024-09-28 16:16:37.358772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.908 [2024-09-28 16:16:37.557411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.167 [2024-09-28 16:16:37.754438] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.167 [2024-09-28 16:16:37.754471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.427 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.427 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:23.428 16:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:23.428 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.428 16:16:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.428 [2024-09-28 16:16:38.006127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.428 [2024-09-28 16:16:38.006184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.428 [2024-09-28 16:16:38.006194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.428 [2024-09-28 16:16:38.006203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.428 [2024-09-28 16:16:38.006209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:23.428 [2024-09-28 16:16:38.006217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.428 16:16:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.428 "name": "Existed_Raid", 00:15:23.428 "uuid": "1ae6d053-4b5a-4941-b865-fbc40a47ad35", 00:15:23.428 "strip_size_kb": 64, 00:15:23.428 "state": "configuring", 00:15:23.428 "raid_level": "raid5f", 00:15:23.428 "superblock": true, 00:15:23.428 "num_base_bdevs": 3, 00:15:23.428 "num_base_bdevs_discovered": 0, 00:15:23.428 "num_base_bdevs_operational": 3, 00:15:23.428 "base_bdevs_list": [ 00:15:23.428 { 00:15:23.428 "name": "BaseBdev1", 00:15:23.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.428 "is_configured": false, 00:15:23.428 "data_offset": 0, 00:15:23.428 "data_size": 0 00:15:23.428 }, 00:15:23.428 { 00:15:23.428 "name": "BaseBdev2", 00:15:23.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.428 "is_configured": false, 00:15:23.428 "data_offset": 0, 00:15:23.428 "data_size": 0 00:15:23.428 }, 00:15:23.428 { 00:15:23.428 "name": "BaseBdev3", 00:15:23.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.428 "is_configured": false, 00:15:23.428 "data_offset": 0, 00:15:23.428 "data_size": 0 00:15:23.428 } 00:15:23.428 ] 00:15:23.428 }' 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.428 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.998 [2024-09-28 16:16:38.453295] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.998 
[2024-09-28 16:16:38.453394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.998 [2024-09-28 16:16:38.465331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.998 [2024-09-28 16:16:38.465372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.998 [2024-09-28 16:16:38.465381] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.998 [2024-09-28 16:16:38.465390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.998 [2024-09-28 16:16:38.465395] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.998 [2024-09-28 16:16:38.465404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.998 [2024-09-28 16:16:38.542695] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.998 BaseBdev1 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.998 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.998 [ 00:15:23.998 { 00:15:23.998 "name": "BaseBdev1", 00:15:23.998 "aliases": [ 00:15:23.998 "bfe62a8f-ca04-4b21-b9ad-4029ab6030fa" 00:15:23.998 ], 00:15:23.998 "product_name": "Malloc disk", 00:15:23.998 "block_size": 512, 00:15:23.998 
"num_blocks": 65536, 00:15:23.998 "uuid": "bfe62a8f-ca04-4b21-b9ad-4029ab6030fa", 00:15:23.998 "assigned_rate_limits": { 00:15:23.998 "rw_ios_per_sec": 0, 00:15:23.998 "rw_mbytes_per_sec": 0, 00:15:23.998 "r_mbytes_per_sec": 0, 00:15:23.998 "w_mbytes_per_sec": 0 00:15:23.998 }, 00:15:23.998 "claimed": true, 00:15:23.998 "claim_type": "exclusive_write", 00:15:23.998 "zoned": false, 00:15:23.998 "supported_io_types": { 00:15:23.998 "read": true, 00:15:23.998 "write": true, 00:15:23.998 "unmap": true, 00:15:23.998 "flush": true, 00:15:23.998 "reset": true, 00:15:23.998 "nvme_admin": false, 00:15:23.998 "nvme_io": false, 00:15:23.998 "nvme_io_md": false, 00:15:23.998 "write_zeroes": true, 00:15:23.998 "zcopy": true, 00:15:23.998 "get_zone_info": false, 00:15:23.998 "zone_management": false, 00:15:23.998 "zone_append": false, 00:15:23.998 "compare": false, 00:15:23.998 "compare_and_write": false, 00:15:23.998 "abort": true, 00:15:23.999 "seek_hole": false, 00:15:23.999 "seek_data": false, 00:15:23.999 "copy": true, 00:15:23.999 "nvme_iov_md": false 00:15:23.999 }, 00:15:23.999 "memory_domains": [ 00:15:23.999 { 00:15:23.999 "dma_device_id": "system", 00:15:23.999 "dma_device_type": 1 00:15:23.999 }, 00:15:23.999 { 00:15:23.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.999 "dma_device_type": 2 00:15:23.999 } 00:15:23.999 ], 00:15:23.999 "driver_specific": {} 00:15:23.999 } 00:15:23.999 ] 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.999 "name": "Existed_Raid", 00:15:23.999 "uuid": "a5daa65c-9458-45e8-b4d5-3b2773a209bb", 00:15:23.999 "strip_size_kb": 64, 00:15:23.999 "state": "configuring", 00:15:23.999 "raid_level": "raid5f", 00:15:23.999 "superblock": true, 00:15:23.999 "num_base_bdevs": 3, 00:15:23.999 "num_base_bdevs_discovered": 1, 00:15:23.999 "num_base_bdevs_operational": 3, 00:15:23.999 "base_bdevs_list": [ 00:15:23.999 { 00:15:23.999 
"name": "BaseBdev1", 00:15:23.999 "uuid": "bfe62a8f-ca04-4b21-b9ad-4029ab6030fa", 00:15:23.999 "is_configured": true, 00:15:23.999 "data_offset": 2048, 00:15:23.999 "data_size": 63488 00:15:23.999 }, 00:15:23.999 { 00:15:23.999 "name": "BaseBdev2", 00:15:23.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.999 "is_configured": false, 00:15:23.999 "data_offset": 0, 00:15:23.999 "data_size": 0 00:15:23.999 }, 00:15:23.999 { 00:15:23.999 "name": "BaseBdev3", 00:15:23.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.999 "is_configured": false, 00:15:23.999 "data_offset": 0, 00:15:23.999 "data_size": 0 00:15:23.999 } 00:15:23.999 ] 00:15:23.999 }' 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.999 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.569 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.569 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.569 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.569 [2024-09-28 16:16:38.993926] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.569 [2024-09-28 16:16:38.993965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:24.569 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.569 16:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.569 16:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.569 16:16:38 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:24.569 [2024-09-28 16:16:39.005955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.569 [2024-09-28 16:16:39.007677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.569 [2024-09-28 16:16:39.007719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.569 [2024-09-28 16:16:39.007729] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.569 [2024-09-28 16:16:39.007737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.569 "name": "Existed_Raid", 00:15:24.569 "uuid": "87506070-60a5-4ff2-a0c9-2b0e66c1854e", 00:15:24.569 "strip_size_kb": 64, 00:15:24.569 "state": "configuring", 00:15:24.569 "raid_level": "raid5f", 00:15:24.569 "superblock": true, 00:15:24.569 "num_base_bdevs": 3, 00:15:24.569 "num_base_bdevs_discovered": 1, 00:15:24.569 "num_base_bdevs_operational": 3, 00:15:24.569 "base_bdevs_list": [ 00:15:24.569 { 00:15:24.569 "name": "BaseBdev1", 00:15:24.569 "uuid": "bfe62a8f-ca04-4b21-b9ad-4029ab6030fa", 00:15:24.569 "is_configured": true, 00:15:24.569 "data_offset": 2048, 00:15:24.569 "data_size": 63488 00:15:24.569 }, 00:15:24.569 { 00:15:24.569 "name": "BaseBdev2", 00:15:24.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.569 "is_configured": false, 00:15:24.569 "data_offset": 0, 00:15:24.569 "data_size": 0 00:15:24.569 }, 00:15:24.569 { 00:15:24.569 "name": "BaseBdev3", 00:15:24.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.569 "is_configured": false, 00:15:24.569 "data_offset": 0, 00:15:24.569 "data_size": 
0 00:15:24.569 } 00:15:24.569 ] 00:15:24.569 }' 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.569 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.829 [2024-09-28 16:16:39.489244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.829 BaseBdev2 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.829 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.089 [ 00:15:25.089 { 00:15:25.089 "name": "BaseBdev2", 00:15:25.089 "aliases": [ 00:15:25.089 "8c4230dd-7b92-4d9f-8479-bbd21494b776" 00:15:25.089 ], 00:15:25.089 "product_name": "Malloc disk", 00:15:25.089 "block_size": 512, 00:15:25.089 "num_blocks": 65536, 00:15:25.089 "uuid": "8c4230dd-7b92-4d9f-8479-bbd21494b776", 00:15:25.089 "assigned_rate_limits": { 00:15:25.089 "rw_ios_per_sec": 0, 00:15:25.089 "rw_mbytes_per_sec": 0, 00:15:25.089 "r_mbytes_per_sec": 0, 00:15:25.089 "w_mbytes_per_sec": 0 00:15:25.089 }, 00:15:25.089 "claimed": true, 00:15:25.089 "claim_type": "exclusive_write", 00:15:25.089 "zoned": false, 00:15:25.089 "supported_io_types": { 00:15:25.089 "read": true, 00:15:25.089 "write": true, 00:15:25.089 "unmap": true, 00:15:25.089 "flush": true, 00:15:25.089 "reset": true, 00:15:25.089 "nvme_admin": false, 00:15:25.089 "nvme_io": false, 00:15:25.089 "nvme_io_md": false, 00:15:25.089 "write_zeroes": true, 00:15:25.089 "zcopy": true, 00:15:25.089 "get_zone_info": false, 00:15:25.089 "zone_management": false, 00:15:25.089 "zone_append": false, 00:15:25.089 "compare": false, 00:15:25.089 "compare_and_write": false, 00:15:25.089 "abort": true, 00:15:25.089 "seek_hole": false, 00:15:25.089 "seek_data": false, 00:15:25.089 "copy": true, 00:15:25.089 "nvme_iov_md": false 00:15:25.089 }, 00:15:25.089 "memory_domains": [ 00:15:25.089 { 00:15:25.089 "dma_device_id": "system", 00:15:25.089 "dma_device_type": 1 00:15:25.089 }, 00:15:25.089 { 00:15:25.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.089 "dma_device_type": 2 00:15:25.089 } 
00:15:25.089 ], 00:15:25.089 "driver_specific": {} 00:15:25.089 } 00:15:25.089 ] 00:15:25.089 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.089 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:25.089 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.089 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.089 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.089 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.090 "name": "Existed_Raid", 00:15:25.090 "uuid": "87506070-60a5-4ff2-a0c9-2b0e66c1854e", 00:15:25.090 "strip_size_kb": 64, 00:15:25.090 "state": "configuring", 00:15:25.090 "raid_level": "raid5f", 00:15:25.090 "superblock": true, 00:15:25.090 "num_base_bdevs": 3, 00:15:25.090 "num_base_bdevs_discovered": 2, 00:15:25.090 "num_base_bdevs_operational": 3, 00:15:25.090 "base_bdevs_list": [ 00:15:25.090 { 00:15:25.090 "name": "BaseBdev1", 00:15:25.090 "uuid": "bfe62a8f-ca04-4b21-b9ad-4029ab6030fa", 00:15:25.090 "is_configured": true, 00:15:25.090 "data_offset": 2048, 00:15:25.090 "data_size": 63488 00:15:25.090 }, 00:15:25.090 { 00:15:25.090 "name": "BaseBdev2", 00:15:25.090 "uuid": "8c4230dd-7b92-4d9f-8479-bbd21494b776", 00:15:25.090 "is_configured": true, 00:15:25.090 "data_offset": 2048, 00:15:25.090 "data_size": 63488 00:15:25.090 }, 00:15:25.090 { 00:15:25.090 "name": "BaseBdev3", 00:15:25.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.090 "is_configured": false, 00:15:25.090 "data_offset": 0, 00:15:25.090 "data_size": 0 00:15:25.090 } 00:15:25.090 ] 00:15:25.090 }' 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.090 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.350 [2024-09-28 16:16:39.936589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.350 [2024-09-28 16:16:39.936833] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:25.350 [2024-09-28 16:16:39.936856] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:25.350 [2024-09-28 16:16:39.937086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:25.350 BaseBdev3 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.350 [2024-09-28 16:16:39.942157] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:25.350 [2024-09-28 16:16:39.942181] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:25.350 [2024-09-28 16:16:39.942362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.350 [ 00:15:25.350 { 00:15:25.350 "name": "BaseBdev3", 00:15:25.350 "aliases": [ 00:15:25.350 "96cc3fe6-f4b7-4d92-bb1e-0fb935672d0b" 00:15:25.350 ], 00:15:25.350 "product_name": "Malloc disk", 00:15:25.350 "block_size": 512, 00:15:25.350 "num_blocks": 65536, 00:15:25.350 "uuid": "96cc3fe6-f4b7-4d92-bb1e-0fb935672d0b", 00:15:25.350 "assigned_rate_limits": { 00:15:25.350 "rw_ios_per_sec": 0, 00:15:25.350 "rw_mbytes_per_sec": 0, 00:15:25.350 "r_mbytes_per_sec": 0, 00:15:25.350 "w_mbytes_per_sec": 0 00:15:25.350 }, 00:15:25.350 "claimed": true, 00:15:25.350 "claim_type": "exclusive_write", 00:15:25.350 "zoned": false, 00:15:25.350 "supported_io_types": { 00:15:25.350 "read": true, 00:15:25.350 "write": true, 00:15:25.350 "unmap": true, 00:15:25.350 "flush": true, 00:15:25.350 "reset": true, 00:15:25.350 "nvme_admin": false, 00:15:25.350 "nvme_io": false, 00:15:25.350 "nvme_io_md": false, 00:15:25.350 "write_zeroes": true, 00:15:25.350 "zcopy": true, 00:15:25.350 "get_zone_info": false, 00:15:25.350 "zone_management": false, 00:15:25.350 "zone_append": false, 00:15:25.350 "compare": false, 00:15:25.350 "compare_and_write": false, 00:15:25.350 "abort": true, 00:15:25.350 "seek_hole": false, 00:15:25.350 "seek_data": false, 00:15:25.350 "copy": true, 00:15:25.350 
"nvme_iov_md": false 00:15:25.350 }, 00:15:25.350 "memory_domains": [ 00:15:25.350 { 00:15:25.350 "dma_device_id": "system", 00:15:25.350 "dma_device_type": 1 00:15:25.350 }, 00:15:25.350 { 00:15:25.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.350 "dma_device_type": 2 00:15:25.350 } 00:15:25.350 ], 00:15:25.350 "driver_specific": {} 00:15:25.350 } 00:15:25.350 ] 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.350 16:16:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.351 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.351 16:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.351 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.351 16:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.351 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.351 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.351 "name": "Existed_Raid", 00:15:25.351 "uuid": "87506070-60a5-4ff2-a0c9-2b0e66c1854e", 00:15:25.351 "strip_size_kb": 64, 00:15:25.351 "state": "online", 00:15:25.351 "raid_level": "raid5f", 00:15:25.351 "superblock": true, 00:15:25.351 "num_base_bdevs": 3, 00:15:25.351 "num_base_bdevs_discovered": 3, 00:15:25.351 "num_base_bdevs_operational": 3, 00:15:25.351 "base_bdevs_list": [ 00:15:25.351 { 00:15:25.351 "name": "BaseBdev1", 00:15:25.351 "uuid": "bfe62a8f-ca04-4b21-b9ad-4029ab6030fa", 00:15:25.351 "is_configured": true, 00:15:25.351 "data_offset": 2048, 00:15:25.351 "data_size": 63488 00:15:25.351 }, 00:15:25.351 { 00:15:25.351 "name": "BaseBdev2", 00:15:25.351 "uuid": "8c4230dd-7b92-4d9f-8479-bbd21494b776", 00:15:25.351 "is_configured": true, 00:15:25.351 "data_offset": 2048, 00:15:25.351 "data_size": 63488 00:15:25.351 }, 00:15:25.351 { 00:15:25.351 "name": "BaseBdev3", 00:15:25.351 "uuid": "96cc3fe6-f4b7-4d92-bb1e-0fb935672d0b", 00:15:25.351 "is_configured": true, 00:15:25.351 "data_offset": 2048, 00:15:25.351 "data_size": 63488 00:15:25.351 } 00:15:25.351 ] 00:15:25.351 }' 00:15:25.351 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.351 16:16:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.921 [2024-09-28 16:16:40.427495] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.921 "name": "Existed_Raid", 00:15:25.921 "aliases": [ 00:15:25.921 "87506070-60a5-4ff2-a0c9-2b0e66c1854e" 00:15:25.921 ], 00:15:25.921 "product_name": "Raid Volume", 00:15:25.921 "block_size": 512, 00:15:25.921 "num_blocks": 126976, 00:15:25.921 "uuid": "87506070-60a5-4ff2-a0c9-2b0e66c1854e", 00:15:25.921 "assigned_rate_limits": { 00:15:25.921 "rw_ios_per_sec": 0, 00:15:25.921 
"rw_mbytes_per_sec": 0, 00:15:25.921 "r_mbytes_per_sec": 0, 00:15:25.921 "w_mbytes_per_sec": 0 00:15:25.921 }, 00:15:25.921 "claimed": false, 00:15:25.921 "zoned": false, 00:15:25.921 "supported_io_types": { 00:15:25.921 "read": true, 00:15:25.921 "write": true, 00:15:25.921 "unmap": false, 00:15:25.921 "flush": false, 00:15:25.921 "reset": true, 00:15:25.921 "nvme_admin": false, 00:15:25.921 "nvme_io": false, 00:15:25.921 "nvme_io_md": false, 00:15:25.921 "write_zeroes": true, 00:15:25.921 "zcopy": false, 00:15:25.921 "get_zone_info": false, 00:15:25.921 "zone_management": false, 00:15:25.921 "zone_append": false, 00:15:25.921 "compare": false, 00:15:25.921 "compare_and_write": false, 00:15:25.921 "abort": false, 00:15:25.921 "seek_hole": false, 00:15:25.921 "seek_data": false, 00:15:25.921 "copy": false, 00:15:25.921 "nvme_iov_md": false 00:15:25.921 }, 00:15:25.921 "driver_specific": { 00:15:25.921 "raid": { 00:15:25.921 "uuid": "87506070-60a5-4ff2-a0c9-2b0e66c1854e", 00:15:25.921 "strip_size_kb": 64, 00:15:25.921 "state": "online", 00:15:25.921 "raid_level": "raid5f", 00:15:25.921 "superblock": true, 00:15:25.921 "num_base_bdevs": 3, 00:15:25.921 "num_base_bdevs_discovered": 3, 00:15:25.921 "num_base_bdevs_operational": 3, 00:15:25.921 "base_bdevs_list": [ 00:15:25.921 { 00:15:25.921 "name": "BaseBdev1", 00:15:25.921 "uuid": "bfe62a8f-ca04-4b21-b9ad-4029ab6030fa", 00:15:25.921 "is_configured": true, 00:15:25.921 "data_offset": 2048, 00:15:25.921 "data_size": 63488 00:15:25.921 }, 00:15:25.921 { 00:15:25.921 "name": "BaseBdev2", 00:15:25.921 "uuid": "8c4230dd-7b92-4d9f-8479-bbd21494b776", 00:15:25.921 "is_configured": true, 00:15:25.921 "data_offset": 2048, 00:15:25.921 "data_size": 63488 00:15:25.921 }, 00:15:25.921 { 00:15:25.921 "name": "BaseBdev3", 00:15:25.921 "uuid": "96cc3fe6-f4b7-4d92-bb1e-0fb935672d0b", 00:15:25.921 "is_configured": true, 00:15:25.921 "data_offset": 2048, 00:15:25.921 "data_size": 63488 00:15:25.921 } 00:15:25.921 ] 00:15:25.921 } 
00:15:25.921 } 00:15:25.921 }' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:25.921 BaseBdev2 00:15:25.921 BaseBdev3' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.921 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.182 [2024-09-28 
16:16:40.710879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.182 16:16:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.182 "name": "Existed_Raid", 00:15:26.182 "uuid": "87506070-60a5-4ff2-a0c9-2b0e66c1854e", 00:15:26.182 "strip_size_kb": 64, 00:15:26.182 "state": "online", 00:15:26.182 "raid_level": "raid5f", 00:15:26.182 "superblock": true, 00:15:26.182 "num_base_bdevs": 3, 00:15:26.182 "num_base_bdevs_discovered": 2, 00:15:26.182 "num_base_bdevs_operational": 2, 00:15:26.182 "base_bdevs_list": [ 00:15:26.182 { 00:15:26.182 "name": null, 00:15:26.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.182 "is_configured": false, 00:15:26.182 "data_offset": 0, 00:15:26.182 "data_size": 63488 00:15:26.182 }, 00:15:26.182 { 00:15:26.182 "name": "BaseBdev2", 00:15:26.182 "uuid": "8c4230dd-7b92-4d9f-8479-bbd21494b776", 00:15:26.182 "is_configured": true, 00:15:26.182 "data_offset": 2048, 00:15:26.182 "data_size": 63488 00:15:26.182 }, 00:15:26.182 { 00:15:26.182 "name": "BaseBdev3", 00:15:26.182 "uuid": "96cc3fe6-f4b7-4d92-bb1e-0fb935672d0b", 00:15:26.182 "is_configured": true, 00:15:26.182 "data_offset": 2048, 00:15:26.182 "data_size": 63488 00:15:26.182 } 00:15:26.182 ] 00:15:26.182 }' 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.182 16:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.752 [2024-09-28 16:16:41.295253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.752 [2024-09-28 16:16:41.295397] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.752 [2024-09-28 16:16:41.384600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.752 16:16:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:26.752 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.012 [2024-09-28 16:16:41.440506] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.012 [2024-09-28 16:16:41.440555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.012 
16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.012 BaseBdev2 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:27.012 16:16:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.012 [ 00:15:27.012 { 00:15:27.012 "name": "BaseBdev2", 00:15:27.012 "aliases": [ 00:15:27.012 "c7f94813-3079-42f5-a78a-4db0172c300e" 00:15:27.012 ], 00:15:27.012 "product_name": "Malloc disk", 00:15:27.012 "block_size": 512, 00:15:27.012 "num_blocks": 65536, 00:15:27.012 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:27.012 "assigned_rate_limits": { 00:15:27.012 "rw_ios_per_sec": 0, 00:15:27.012 "rw_mbytes_per_sec": 0, 00:15:27.012 "r_mbytes_per_sec": 0, 00:15:27.012 "w_mbytes_per_sec": 0 00:15:27.012 }, 00:15:27.012 "claimed": false, 00:15:27.012 "zoned": false, 00:15:27.012 "supported_io_types": { 00:15:27.012 "read": true, 00:15:27.012 "write": true, 00:15:27.012 "unmap": true, 00:15:27.012 "flush": true, 00:15:27.012 "reset": true, 00:15:27.012 "nvme_admin": false, 00:15:27.012 "nvme_io": false, 00:15:27.012 "nvme_io_md": false, 00:15:27.012 "write_zeroes": true, 00:15:27.012 "zcopy": true, 00:15:27.012 "get_zone_info": false, 
00:15:27.012 "zone_management": false, 00:15:27.012 "zone_append": false, 00:15:27.012 "compare": false, 00:15:27.012 "compare_and_write": false, 00:15:27.012 "abort": true, 00:15:27.012 "seek_hole": false, 00:15:27.012 "seek_data": false, 00:15:27.012 "copy": true, 00:15:27.012 "nvme_iov_md": false 00:15:27.012 }, 00:15:27.012 "memory_domains": [ 00:15:27.012 { 00:15:27.012 "dma_device_id": "system", 00:15:27.012 "dma_device_type": 1 00:15:27.012 }, 00:15:27.012 { 00:15:27.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.012 "dma_device_type": 2 00:15:27.012 } 00:15:27.012 ], 00:15:27.012 "driver_specific": {} 00:15:27.012 } 00:15:27.012 ] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.012 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.272 BaseBdev3 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.272 16:16:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.272 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.273 [ 00:15:27.273 { 00:15:27.273 "name": "BaseBdev3", 00:15:27.273 "aliases": [ 00:15:27.273 "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14" 00:15:27.273 ], 00:15:27.273 "product_name": "Malloc disk", 00:15:27.273 "block_size": 512, 00:15:27.273 "num_blocks": 65536, 00:15:27.273 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:27.273 "assigned_rate_limits": { 00:15:27.273 "rw_ios_per_sec": 0, 00:15:27.273 "rw_mbytes_per_sec": 0, 00:15:27.273 "r_mbytes_per_sec": 0, 00:15:27.273 "w_mbytes_per_sec": 0 00:15:27.273 }, 00:15:27.273 "claimed": false, 00:15:27.273 "zoned": false, 00:15:27.273 "supported_io_types": { 00:15:27.273 "read": true, 00:15:27.273 "write": true, 00:15:27.273 "unmap": true, 00:15:27.273 "flush": true, 00:15:27.273 "reset": true, 00:15:27.273 "nvme_admin": false, 00:15:27.273 "nvme_io": false, 00:15:27.273 "nvme_io_md": 
false, 00:15:27.273 "write_zeroes": true, 00:15:27.273 "zcopy": true, 00:15:27.273 "get_zone_info": false, 00:15:27.273 "zone_management": false, 00:15:27.273 "zone_append": false, 00:15:27.273 "compare": false, 00:15:27.273 "compare_and_write": false, 00:15:27.273 "abort": true, 00:15:27.273 "seek_hole": false, 00:15:27.273 "seek_data": false, 00:15:27.273 "copy": true, 00:15:27.273 "nvme_iov_md": false 00:15:27.273 }, 00:15:27.273 "memory_domains": [ 00:15:27.273 { 00:15:27.273 "dma_device_id": "system", 00:15:27.273 "dma_device_type": 1 00:15:27.273 }, 00:15:27.273 { 00:15:27.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.273 "dma_device_type": 2 00:15:27.273 } 00:15:27.273 ], 00:15:27.273 "driver_specific": {} 00:15:27.273 } 00:15:27.273 ] 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.273 [2024-09-28 16:16:41.749695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.273 [2024-09-28 16:16:41.749821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.273 [2024-09-28 16:16:41.749859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:27.273 [2024-09-28 16:16:41.751528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.273 16:16:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.273 "name": "Existed_Raid", 00:15:27.273 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:27.273 "strip_size_kb": 64, 00:15:27.273 "state": "configuring", 00:15:27.273 "raid_level": "raid5f", 00:15:27.273 "superblock": true, 00:15:27.273 "num_base_bdevs": 3, 00:15:27.273 "num_base_bdevs_discovered": 2, 00:15:27.273 "num_base_bdevs_operational": 3, 00:15:27.273 "base_bdevs_list": [ 00:15:27.273 { 00:15:27.273 "name": "BaseBdev1", 00:15:27.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.273 "is_configured": false, 00:15:27.273 "data_offset": 0, 00:15:27.273 "data_size": 0 00:15:27.273 }, 00:15:27.273 { 00:15:27.273 "name": "BaseBdev2", 00:15:27.273 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:27.273 "is_configured": true, 00:15:27.273 "data_offset": 2048, 00:15:27.273 "data_size": 63488 00:15:27.273 }, 00:15:27.273 { 00:15:27.273 "name": "BaseBdev3", 00:15:27.273 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:27.273 "is_configured": true, 00:15:27.273 "data_offset": 2048, 00:15:27.273 "data_size": 63488 00:15:27.273 } 00:15:27.273 ] 00:15:27.273 }' 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.273 16:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.848 [2024-09-28 16:16:42.228806] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.848 
16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:27.848 "name": "Existed_Raid", 00:15:27.848 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:27.848 "strip_size_kb": 64, 00:15:27.848 "state": "configuring", 00:15:27.848 "raid_level": "raid5f", 00:15:27.848 "superblock": true, 00:15:27.848 "num_base_bdevs": 3, 00:15:27.848 "num_base_bdevs_discovered": 1, 00:15:27.848 "num_base_bdevs_operational": 3, 00:15:27.848 "base_bdevs_list": [ 00:15:27.848 { 00:15:27.848 "name": "BaseBdev1", 00:15:27.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.848 "is_configured": false, 00:15:27.848 "data_offset": 0, 00:15:27.848 "data_size": 0 00:15:27.848 }, 00:15:27.848 { 00:15:27.848 "name": null, 00:15:27.848 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:27.848 "is_configured": false, 00:15:27.848 "data_offset": 0, 00:15:27.848 "data_size": 63488 00:15:27.848 }, 00:15:27.848 { 00:15:27.848 "name": "BaseBdev3", 00:15:27.848 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:27.848 "is_configured": true, 00:15:27.848 "data_offset": 2048, 00:15:27.848 "data_size": 63488 00:15:27.848 } 00:15:27.848 ] 00:15:27.848 }' 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.848 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.108 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.108 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.108 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.108 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.108 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.108 16:16:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:28.108 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.109 [2024-09-28 16:16:42.719370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.109 BaseBdev1 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.109 
16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.109 [ 00:15:28.109 { 00:15:28.109 "name": "BaseBdev1", 00:15:28.109 "aliases": [ 00:15:28.109 "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7" 00:15:28.109 ], 00:15:28.109 "product_name": "Malloc disk", 00:15:28.109 "block_size": 512, 00:15:28.109 "num_blocks": 65536, 00:15:28.109 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:28.109 "assigned_rate_limits": { 00:15:28.109 "rw_ios_per_sec": 0, 00:15:28.109 "rw_mbytes_per_sec": 0, 00:15:28.109 "r_mbytes_per_sec": 0, 00:15:28.109 "w_mbytes_per_sec": 0 00:15:28.109 }, 00:15:28.109 "claimed": true, 00:15:28.109 "claim_type": "exclusive_write", 00:15:28.109 "zoned": false, 00:15:28.109 "supported_io_types": { 00:15:28.109 "read": true, 00:15:28.109 "write": true, 00:15:28.109 "unmap": true, 00:15:28.109 "flush": true, 00:15:28.109 "reset": true, 00:15:28.109 "nvme_admin": false, 00:15:28.109 "nvme_io": false, 00:15:28.109 "nvme_io_md": false, 00:15:28.109 "write_zeroes": true, 00:15:28.109 "zcopy": true, 00:15:28.109 "get_zone_info": false, 00:15:28.109 "zone_management": false, 00:15:28.109 "zone_append": false, 00:15:28.109 "compare": false, 00:15:28.109 "compare_and_write": false, 00:15:28.109 "abort": true, 00:15:28.109 "seek_hole": false, 00:15:28.109 "seek_data": false, 00:15:28.109 "copy": true, 00:15:28.109 "nvme_iov_md": false 00:15:28.109 }, 00:15:28.109 "memory_domains": [ 00:15:28.109 { 00:15:28.109 "dma_device_id": "system", 00:15:28.109 "dma_device_type": 1 00:15:28.109 }, 00:15:28.109 { 00:15:28.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.109 "dma_device_type": 2 00:15:28.109 } 00:15:28.109 ], 00:15:28.109 "driver_specific": {} 00:15:28.109 } 00:15:28.109 ] 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.109 
16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.109 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.369 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:28.369 "name": "Existed_Raid", 00:15:28.369 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:28.369 "strip_size_kb": 64, 00:15:28.369 "state": "configuring", 00:15:28.369 "raid_level": "raid5f", 00:15:28.369 "superblock": true, 00:15:28.369 "num_base_bdevs": 3, 00:15:28.369 "num_base_bdevs_discovered": 2, 00:15:28.369 "num_base_bdevs_operational": 3, 00:15:28.369 "base_bdevs_list": [ 00:15:28.369 { 00:15:28.369 "name": "BaseBdev1", 00:15:28.369 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:28.369 "is_configured": true, 00:15:28.369 "data_offset": 2048, 00:15:28.369 "data_size": 63488 00:15:28.369 }, 00:15:28.369 { 00:15:28.369 "name": null, 00:15:28.369 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:28.369 "is_configured": false, 00:15:28.369 "data_offset": 0, 00:15:28.369 "data_size": 63488 00:15:28.369 }, 00:15:28.369 { 00:15:28.369 "name": "BaseBdev3", 00:15:28.369 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:28.369 "is_configured": true, 00:15:28.369 "data_offset": 2048, 00:15:28.369 "data_size": 63488 00:15:28.369 } 00:15:28.369 ] 00:15:28.369 }' 00:15:28.369 16:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.369 16:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.628 [2024-09-28 16:16:43.214650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.628 16:16:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.628 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.628 "name": "Existed_Raid", 00:15:28.628 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:28.629 "strip_size_kb": 64, 00:15:28.629 "state": "configuring", 00:15:28.629 "raid_level": "raid5f", 00:15:28.629 "superblock": true, 00:15:28.629 "num_base_bdevs": 3, 00:15:28.629 "num_base_bdevs_discovered": 1, 00:15:28.629 "num_base_bdevs_operational": 3, 00:15:28.629 "base_bdevs_list": [ 00:15:28.629 { 00:15:28.629 "name": "BaseBdev1", 00:15:28.629 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:28.629 "is_configured": true, 00:15:28.629 "data_offset": 2048, 00:15:28.629 "data_size": 63488 00:15:28.629 }, 00:15:28.629 { 00:15:28.629 "name": null, 00:15:28.629 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:28.629 "is_configured": false, 00:15:28.629 "data_offset": 0, 00:15:28.629 "data_size": 63488 00:15:28.629 }, 00:15:28.629 { 00:15:28.629 "name": null, 00:15:28.629 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:28.629 "is_configured": false, 00:15:28.629 "data_offset": 0, 00:15:28.629 "data_size": 63488 00:15:28.629 } 00:15:28.629 ] 00:15:28.629 }' 00:15:28.629 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.629 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.198 [2024-09-28 16:16:43.717770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.198 
16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.198 "name": "Existed_Raid", 00:15:29.198 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:29.198 "strip_size_kb": 64, 00:15:29.198 "state": "configuring", 00:15:29.198 "raid_level": "raid5f", 00:15:29.198 "superblock": true, 00:15:29.198 "num_base_bdevs": 3, 00:15:29.198 "num_base_bdevs_discovered": 2, 00:15:29.198 "num_base_bdevs_operational": 3, 00:15:29.198 "base_bdevs_list": [ 00:15:29.198 { 00:15:29.198 "name": "BaseBdev1", 00:15:29.198 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:29.198 "is_configured": true, 00:15:29.198 "data_offset": 2048, 00:15:29.198 "data_size": 63488 00:15:29.198 }, 00:15:29.198 { 00:15:29.198 "name": null, 00:15:29.198 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:29.198 "is_configured": false, 00:15:29.198 "data_offset": 0, 00:15:29.198 "data_size": 63488 00:15:29.198 }, 
00:15:29.198 { 00:15:29.198 "name": "BaseBdev3", 00:15:29.198 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:29.198 "is_configured": true, 00:15:29.198 "data_offset": 2048, 00:15:29.198 "data_size": 63488 00:15:29.198 } 00:15:29.198 ] 00:15:29.198 }' 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.198 16:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 [2024-09-28 16:16:44.185085] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.766 "name": "Existed_Raid", 00:15:29.766 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:29.766 "strip_size_kb": 64, 00:15:29.766 "state": "configuring", 00:15:29.766 "raid_level": "raid5f", 00:15:29.766 "superblock": true, 00:15:29.766 "num_base_bdevs": 3, 00:15:29.766 "num_base_bdevs_discovered": 1, 00:15:29.766 
"num_base_bdevs_operational": 3, 00:15:29.766 "base_bdevs_list": [ 00:15:29.766 { 00:15:29.766 "name": null, 00:15:29.766 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:29.766 "is_configured": false, 00:15:29.766 "data_offset": 0, 00:15:29.766 "data_size": 63488 00:15:29.766 }, 00:15:29.766 { 00:15:29.766 "name": null, 00:15:29.766 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:29.766 "is_configured": false, 00:15:29.766 "data_offset": 0, 00:15:29.766 "data_size": 63488 00:15:29.766 }, 00:15:29.766 { 00:15:29.766 "name": "BaseBdev3", 00:15:29.766 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:29.766 "is_configured": true, 00:15:29.766 "data_offset": 2048, 00:15:29.766 "data_size": 63488 00:15:29.766 } 00:15:29.766 ] 00:15:29.766 }' 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.766 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.335 16:16:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.335 [2024-09-28 16:16:44.779352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.335 "name": "Existed_Raid", 00:15:30.335 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:30.335 "strip_size_kb": 64, 00:15:30.335 "state": "configuring", 00:15:30.335 "raid_level": "raid5f", 00:15:30.335 "superblock": true, 00:15:30.335 "num_base_bdevs": 3, 00:15:30.335 "num_base_bdevs_discovered": 2, 00:15:30.335 "num_base_bdevs_operational": 3, 00:15:30.335 "base_bdevs_list": [ 00:15:30.335 { 00:15:30.335 "name": null, 00:15:30.335 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:30.335 "is_configured": false, 00:15:30.335 "data_offset": 0, 00:15:30.335 "data_size": 63488 00:15:30.335 }, 00:15:30.335 { 00:15:30.335 "name": "BaseBdev2", 00:15:30.335 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:30.335 "is_configured": true, 00:15:30.335 "data_offset": 2048, 00:15:30.335 "data_size": 63488 00:15:30.335 }, 00:15:30.335 { 00:15:30.335 "name": "BaseBdev3", 00:15:30.335 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:30.335 "is_configured": true, 00:15:30.335 "data_offset": 2048, 00:15:30.335 "data_size": 63488 00:15:30.335 } 00:15:30.335 ] 00:15:30.335 }' 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.335 16:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.595 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.855 [2024-09-28 16:16:45.321920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:30.855 [2024-09-28 16:16:45.322175] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:30.855 [2024-09-28 16:16:45.322213] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.855 [2024-09-28 16:16:45.322529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:30.855 NewBaseBdev 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.855 [2024-09-28 16:16:45.327901] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:30.855 [2024-09-28 16:16:45.327965] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:30.855 [2024-09-28 16:16:45.328151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.855 [ 00:15:30.855 { 00:15:30.855 "name": "NewBaseBdev", 00:15:30.855 "aliases": [ 00:15:30.855 "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7" 00:15:30.855 
], 00:15:30.855 "product_name": "Malloc disk", 00:15:30.855 "block_size": 512, 00:15:30.855 "num_blocks": 65536, 00:15:30.855 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:30.855 "assigned_rate_limits": { 00:15:30.855 "rw_ios_per_sec": 0, 00:15:30.855 "rw_mbytes_per_sec": 0, 00:15:30.855 "r_mbytes_per_sec": 0, 00:15:30.855 "w_mbytes_per_sec": 0 00:15:30.855 }, 00:15:30.855 "claimed": true, 00:15:30.855 "claim_type": "exclusive_write", 00:15:30.855 "zoned": false, 00:15:30.855 "supported_io_types": { 00:15:30.855 "read": true, 00:15:30.855 "write": true, 00:15:30.855 "unmap": true, 00:15:30.855 "flush": true, 00:15:30.855 "reset": true, 00:15:30.855 "nvme_admin": false, 00:15:30.855 "nvme_io": false, 00:15:30.855 "nvme_io_md": false, 00:15:30.855 "write_zeroes": true, 00:15:30.855 "zcopy": true, 00:15:30.855 "get_zone_info": false, 00:15:30.855 "zone_management": false, 00:15:30.855 "zone_append": false, 00:15:30.855 "compare": false, 00:15:30.855 "compare_and_write": false, 00:15:30.855 "abort": true, 00:15:30.855 "seek_hole": false, 00:15:30.855 "seek_data": false, 00:15:30.855 "copy": true, 00:15:30.855 "nvme_iov_md": false 00:15:30.855 }, 00:15:30.855 "memory_domains": [ 00:15:30.855 { 00:15:30.855 "dma_device_id": "system", 00:15:30.855 "dma_device_type": 1 00:15:30.855 }, 00:15:30.855 { 00:15:30.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.855 "dma_device_type": 2 00:15:30.855 } 00:15:30.855 ], 00:15:30.855 "driver_specific": {} 00:15:30.855 } 00:15:30.855 ] 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.855 "name": "Existed_Raid", 00:15:30.855 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:30.855 "strip_size_kb": 64, 00:15:30.855 "state": "online", 00:15:30.855 "raid_level": "raid5f", 00:15:30.855 "superblock": true, 00:15:30.855 "num_base_bdevs": 3, 00:15:30.855 "num_base_bdevs_discovered": 3, 00:15:30.855 
"num_base_bdevs_operational": 3, 00:15:30.855 "base_bdevs_list": [ 00:15:30.855 { 00:15:30.855 "name": "NewBaseBdev", 00:15:30.855 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:30.855 "is_configured": true, 00:15:30.855 "data_offset": 2048, 00:15:30.855 "data_size": 63488 00:15:30.855 }, 00:15:30.855 { 00:15:30.855 "name": "BaseBdev2", 00:15:30.855 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:30.855 "is_configured": true, 00:15:30.855 "data_offset": 2048, 00:15:30.855 "data_size": 63488 00:15:30.855 }, 00:15:30.855 { 00:15:30.855 "name": "BaseBdev3", 00:15:30.855 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:30.855 "is_configured": true, 00:15:30.855 "data_offset": 2048, 00:15:30.855 "data_size": 63488 00:15:30.855 } 00:15:30.855 ] 00:15:30.855 }' 00:15:30.855 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.856 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.423 16:16:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.423 [2024-09-28 16:16:45.845403] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.423 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.424 "name": "Existed_Raid", 00:15:31.424 "aliases": [ 00:15:31.424 "832ebf44-edcd-4914-aff9-5ae922be6e13" 00:15:31.424 ], 00:15:31.424 "product_name": "Raid Volume", 00:15:31.424 "block_size": 512, 00:15:31.424 "num_blocks": 126976, 00:15:31.424 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:31.424 "assigned_rate_limits": { 00:15:31.424 "rw_ios_per_sec": 0, 00:15:31.424 "rw_mbytes_per_sec": 0, 00:15:31.424 "r_mbytes_per_sec": 0, 00:15:31.424 "w_mbytes_per_sec": 0 00:15:31.424 }, 00:15:31.424 "claimed": false, 00:15:31.424 "zoned": false, 00:15:31.424 "supported_io_types": { 00:15:31.424 "read": true, 00:15:31.424 "write": true, 00:15:31.424 "unmap": false, 00:15:31.424 "flush": false, 00:15:31.424 "reset": true, 00:15:31.424 "nvme_admin": false, 00:15:31.424 "nvme_io": false, 00:15:31.424 "nvme_io_md": false, 00:15:31.424 "write_zeroes": true, 00:15:31.424 "zcopy": false, 00:15:31.424 "get_zone_info": false, 00:15:31.424 "zone_management": false, 00:15:31.424 "zone_append": false, 00:15:31.424 "compare": false, 00:15:31.424 "compare_and_write": false, 00:15:31.424 "abort": false, 00:15:31.424 "seek_hole": false, 00:15:31.424 "seek_data": false, 00:15:31.424 "copy": false, 00:15:31.424 "nvme_iov_md": false 00:15:31.424 }, 00:15:31.424 "driver_specific": { 00:15:31.424 "raid": { 00:15:31.424 "uuid": "832ebf44-edcd-4914-aff9-5ae922be6e13", 00:15:31.424 "strip_size_kb": 64, 00:15:31.424 "state": "online", 00:15:31.424 
"raid_level": "raid5f", 00:15:31.424 "superblock": true, 00:15:31.424 "num_base_bdevs": 3, 00:15:31.424 "num_base_bdevs_discovered": 3, 00:15:31.424 "num_base_bdevs_operational": 3, 00:15:31.424 "base_bdevs_list": [ 00:15:31.424 { 00:15:31.424 "name": "NewBaseBdev", 00:15:31.424 "uuid": "fd882dfa-d6cf-43d2-a6cd-d1a0a198e2d7", 00:15:31.424 "is_configured": true, 00:15:31.424 "data_offset": 2048, 00:15:31.424 "data_size": 63488 00:15:31.424 }, 00:15:31.424 { 00:15:31.424 "name": "BaseBdev2", 00:15:31.424 "uuid": "c7f94813-3079-42f5-a78a-4db0172c300e", 00:15:31.424 "is_configured": true, 00:15:31.424 "data_offset": 2048, 00:15:31.424 "data_size": 63488 00:15:31.424 }, 00:15:31.424 { 00:15:31.424 "name": "BaseBdev3", 00:15:31.424 "uuid": "48b4dcd3-8ab0-457a-a0b3-80e3679d7a14", 00:15:31.424 "is_configured": true, 00:15:31.424 "data_offset": 2048, 00:15:31.424 "data_size": 63488 00:15:31.424 } 00:15:31.424 ] 00:15:31.424 } 00:15:31.424 } 00:15:31.424 }' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:31.424 BaseBdev2 00:15:31.424 BaseBdev3' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.424 16:16:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.424 16:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.424 [2024-09-28 16:16:46.064814] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.424 [2024-09-28 16:16:46.064838] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.424 [2024-09-28 16:16:46.064898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.424 [2024-09-28 16:16:46.065153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.424 [2024-09-28 16:16:46.065165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80517 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80517 ']' 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 80517 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.424 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80517 00:15:31.684 killing process with pid 80517 00:15:31.684 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.684 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.684 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80517' 00:15:31.684 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80517 00:15:31.684 [2024-09-28 16:16:46.114245] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.684 16:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80517 00:15:31.943 [2024-09-28 16:16:46.395508] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.323 16:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:33.323 00:15:33.323 real 0m10.502s 00:15:33.323 user 0m16.562s 00:15:33.323 sys 0m2.008s 00:15:33.323 16:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.323 ************************************ 00:15:33.323 END TEST raid5f_state_function_test_sb 00:15:33.323 ************************************ 00:15:33.323 16:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.323 16:16:47 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:33.323 16:16:47 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:33.323 16:16:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.323 16:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.323 ************************************ 00:15:33.323 START TEST raid5f_superblock_test 00:15:33.323 ************************************ 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81135 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81135 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81135 ']' 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.323 16:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.323 [2024-09-28 16:16:47.758832] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:33.323 [2024-09-28 16:16:47.758954] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81135 ] 00:15:33.323 [2024-09-28 16:16:47.922341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.583 [2024-09-28 16:16:48.126726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.843 [2024-09-28 16:16:48.320298] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.843 [2024-09-28 16:16:48.320408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.103 malloc1 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.103 [2024-09-28 16:16:48.633519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:34.103 [2024-09-28 16:16:48.633671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.103 [2024-09-28 16:16:48.633712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:34.103 [2024-09-28 16:16:48.633742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.103 [2024-09-28 16:16:48.635672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.103 [2024-09-28 16:16:48.635744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:34.103 pt1 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.103 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.104 malloc2 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.104 [2024-09-28 16:16:48.704218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.104 [2024-09-28 16:16:48.704336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.104 [2024-09-28 16:16:48.704362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:34.104 [2024-09-28 16:16:48.704371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.104 [2024-09-28 16:16:48.706326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.104 [2024-09-28 16:16:48.706362] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.104 pt2 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.104 malloc3 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.104 [2024-09-28 16:16:48.754830] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:34.104 [2024-09-28 16:16:48.754936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.104 [2024-09-28 16:16:48.754971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:34.104 [2024-09-28 16:16:48.754998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.104 [2024-09-28 16:16:48.756870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.104 [2024-09-28 16:16:48.756939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:34.104 pt3 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.104 [2024-09-28 16:16:48.766881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:34.104 [2024-09-28 16:16:48.768538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.104 [2024-09-28 16:16:48.768636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:34.104 [2024-09-28 16:16:48.768804] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:34.104 [2024-09-28 16:16:48.768875] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:34.104 [2024-09-28 16:16:48.769089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:34.104 [2024-09-28 16:16:48.773994] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:34.104 [2024-09-28 16:16:48.774048] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:34.104 [2024-09-28 16:16:48.774269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.104 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.364 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.364 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.364 "name": "raid_bdev1", 00:15:34.364 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:34.364 "strip_size_kb": 64, 00:15:34.364 "state": "online", 00:15:34.364 "raid_level": "raid5f", 00:15:34.364 "superblock": true, 00:15:34.364 "num_base_bdevs": 3, 00:15:34.364 "num_base_bdevs_discovered": 3, 00:15:34.364 "num_base_bdevs_operational": 3, 00:15:34.364 "base_bdevs_list": [ 00:15:34.364 { 00:15:34.364 "name": "pt1", 00:15:34.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.364 "is_configured": true, 00:15:34.364 "data_offset": 2048, 00:15:34.364 "data_size": 63488 00:15:34.364 }, 00:15:34.364 { 00:15:34.364 "name": "pt2", 00:15:34.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.364 "is_configured": true, 00:15:34.364 "data_offset": 2048, 00:15:34.364 "data_size": 63488 00:15:34.364 }, 00:15:34.364 { 00:15:34.364 "name": "pt3", 00:15:34.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.364 "is_configured": true, 00:15:34.364 "data_offset": 2048, 00:15:34.364 "data_size": 63488 00:15:34.364 } 00:15:34.364 ] 00:15:34.364 }' 00:15:34.364 16:16:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.364 16:16:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:34.624 16:16:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.624 [2024-09-28 16:16:49.231504] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.624 "name": "raid_bdev1", 00:15:34.624 "aliases": [ 00:15:34.624 "5d323496-14b1-4eb8-8d62-d4832b1f5d22" 00:15:34.624 ], 00:15:34.624 "product_name": "Raid Volume", 00:15:34.624 "block_size": 512, 00:15:34.624 "num_blocks": 126976, 00:15:34.624 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:34.624 "assigned_rate_limits": { 00:15:34.624 "rw_ios_per_sec": 0, 00:15:34.624 "rw_mbytes_per_sec": 0, 00:15:34.624 "r_mbytes_per_sec": 0, 00:15:34.624 "w_mbytes_per_sec": 0 00:15:34.624 }, 00:15:34.624 "claimed": false, 00:15:34.624 "zoned": false, 00:15:34.624 "supported_io_types": { 00:15:34.624 "read": true, 00:15:34.624 "write": true, 00:15:34.624 "unmap": false, 00:15:34.624 "flush": false, 00:15:34.624 "reset": true, 00:15:34.624 "nvme_admin": false, 00:15:34.624 "nvme_io": false, 00:15:34.624 "nvme_io_md": false, 
00:15:34.624 "write_zeroes": true, 00:15:34.624 "zcopy": false, 00:15:34.624 "get_zone_info": false, 00:15:34.624 "zone_management": false, 00:15:34.624 "zone_append": false, 00:15:34.624 "compare": false, 00:15:34.624 "compare_and_write": false, 00:15:34.624 "abort": false, 00:15:34.624 "seek_hole": false, 00:15:34.624 "seek_data": false, 00:15:34.624 "copy": false, 00:15:34.624 "nvme_iov_md": false 00:15:34.624 }, 00:15:34.624 "driver_specific": { 00:15:34.624 "raid": { 00:15:34.624 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:34.624 "strip_size_kb": 64, 00:15:34.624 "state": "online", 00:15:34.624 "raid_level": "raid5f", 00:15:34.624 "superblock": true, 00:15:34.624 "num_base_bdevs": 3, 00:15:34.624 "num_base_bdevs_discovered": 3, 00:15:34.624 "num_base_bdevs_operational": 3, 00:15:34.624 "base_bdevs_list": [ 00:15:34.624 { 00:15:34.624 "name": "pt1", 00:15:34.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.624 "is_configured": true, 00:15:34.624 "data_offset": 2048, 00:15:34.624 "data_size": 63488 00:15:34.624 }, 00:15:34.624 { 00:15:34.624 "name": "pt2", 00:15:34.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.624 "is_configured": true, 00:15:34.624 "data_offset": 2048, 00:15:34.624 "data_size": 63488 00:15:34.624 }, 00:15:34.624 { 00:15:34.624 "name": "pt3", 00:15:34.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.624 "is_configured": true, 00:15:34.624 "data_offset": 2048, 00:15:34.624 "data_size": 63488 00:15:34.624 } 00:15:34.624 ] 00:15:34.624 } 00:15:34.624 } 00:15:34.624 }' 00:15:34.624 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:34.884 pt2 00:15:34.884 pt3' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.884 
16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:34.884 [2024-09-28 16:16:49.503085] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5d323496-14b1-4eb8-8d62-d4832b1f5d22 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5d323496-14b1-4eb8-8d62-d4832b1f5d22 ']' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.884 16:16:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.884 [2024-09-28 16:16:49.550843] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.884 [2024-09-28 16:16:49.550868] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.884 [2024-09-28 16:16:49.550925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.884 [2024-09-28 16:16:49.550987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.884 [2024-09-28 16:16:49.550995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.884 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.144 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 [2024-09-28 16:16:49.710592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:35.145 [2024-09-28 16:16:49.712277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:35.145 [2024-09-28 16:16:49.712326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:35.145 [2024-09-28 16:16:49.712369] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:35.145 [2024-09-28 16:16:49.712409] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:35.145 [2024-09-28 16:16:49.712426] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:35.145 [2024-09-28 16:16:49.712441] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.145 [2024-09-28 16:16:49.712451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:35.145 request: 00:15:35.145 { 00:15:35.145 "name": "raid_bdev1", 00:15:35.145 "raid_level": "raid5f", 00:15:35.145 "base_bdevs": [ 00:15:35.145 "malloc1", 00:15:35.145 "malloc2", 00:15:35.145 "malloc3" 00:15:35.145 ], 00:15:35.145 "strip_size_kb": 64, 00:15:35.145 "superblock": false, 00:15:35.145 "method": "bdev_raid_create", 00:15:35.145 "req_id": 1 00:15:35.145 } 00:15:35.145 Got JSON-RPC error response 00:15:35.145 response: 00:15:35.145 { 00:15:35.145 "code": -17, 00:15:35.145 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:35.145 } 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 
16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 [2024-09-28 16:16:49.778439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.145 [2024-09-28 16:16:49.778527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.145 [2024-09-28 16:16:49.778559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:35.145 [2024-09-28 16:16:49.778588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.145 [2024-09-28 16:16:49.780556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.145 [2024-09-28 16:16:49.780625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.145 [2024-09-28 16:16:49.780705] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.145 [2024-09-28 16:16:49.780775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.145 pt1 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.405 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.405 "name": "raid_bdev1", 00:15:35.405 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:35.405 "strip_size_kb": 64, 00:15:35.405 "state": "configuring", 00:15:35.405 "raid_level": "raid5f", 00:15:35.405 "superblock": true, 00:15:35.405 "num_base_bdevs": 3, 00:15:35.405 "num_base_bdevs_discovered": 1, 00:15:35.405 
"num_base_bdevs_operational": 3, 00:15:35.405 "base_bdevs_list": [ 00:15:35.405 { 00:15:35.405 "name": "pt1", 00:15:35.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.405 "is_configured": true, 00:15:35.405 "data_offset": 2048, 00:15:35.405 "data_size": 63488 00:15:35.405 }, 00:15:35.405 { 00:15:35.405 "name": null, 00:15:35.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.405 "is_configured": false, 00:15:35.405 "data_offset": 2048, 00:15:35.405 "data_size": 63488 00:15:35.405 }, 00:15:35.405 { 00:15:35.405 "name": null, 00:15:35.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.405 "is_configured": false, 00:15:35.405 "data_offset": 2048, 00:15:35.405 "data_size": 63488 00:15:35.405 } 00:15:35.405 ] 00:15:35.405 }' 00:15:35.405 16:16:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.405 16:16:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.664 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.665 [2024-09-28 16:16:50.233731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:35.665 [2024-09-28 16:16:50.233783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.665 [2024-09-28 16:16:50.233802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:35.665 [2024-09-28 16:16:50.233810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.665 [2024-09-28 16:16:50.234127] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.665 [2024-09-28 16:16:50.234144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:35.665 [2024-09-28 16:16:50.234200] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:35.665 [2024-09-28 16:16:50.234217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.665 pt2 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.665 [2024-09-28 16:16:50.245736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.665 "name": "raid_bdev1", 00:15:35.665 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:35.665 "strip_size_kb": 64, 00:15:35.665 "state": "configuring", 00:15:35.665 "raid_level": "raid5f", 00:15:35.665 "superblock": true, 00:15:35.665 "num_base_bdevs": 3, 00:15:35.665 "num_base_bdevs_discovered": 1, 00:15:35.665 "num_base_bdevs_operational": 3, 00:15:35.665 "base_bdevs_list": [ 00:15:35.665 { 00:15:35.665 "name": "pt1", 00:15:35.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.665 "is_configured": true, 00:15:35.665 "data_offset": 2048, 00:15:35.665 "data_size": 63488 00:15:35.665 }, 00:15:35.665 { 00:15:35.665 "name": null, 00:15:35.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.665 "is_configured": false, 00:15:35.665 "data_offset": 0, 00:15:35.665 "data_size": 63488 00:15:35.665 }, 00:15:35.665 { 00:15:35.665 "name": null, 00:15:35.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.665 "is_configured": false, 00:15:35.665 "data_offset": 2048, 00:15:35.665 "data_size": 63488 00:15:35.665 } 00:15:35.665 ] 00:15:35.665 }' 00:15:35.665 16:16:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.665 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 [2024-09-28 16:16:50.724866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.235 [2024-09-28 16:16:50.724917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.235 [2024-09-28 16:16:50.724931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:36.235 [2024-09-28 16:16:50.724941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.235 [2024-09-28 16:16:50.725279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.235 [2024-09-28 16:16:50.725300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.235 [2024-09-28 16:16:50.725353] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.235 [2024-09-28 16:16:50.725379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.235 pt2 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:36.235 16:16:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 [2024-09-28 16:16:50.736867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.235 [2024-09-28 16:16:50.736962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.235 [2024-09-28 16:16:50.736978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:36.235 [2024-09-28 16:16:50.736988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.235 [2024-09-28 16:16:50.737331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.235 [2024-09-28 16:16:50.737354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.235 [2024-09-28 16:16:50.737408] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:36.235 [2024-09-28 16:16:50.737426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.235 [2024-09-28 16:16:50.737524] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:36.235 [2024-09-28 16:16:50.737534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.235 [2024-09-28 16:16:50.737752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:36.235 [2024-09-28 16:16:50.742968] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:36.235 [2024-09-28 16:16:50.742990] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:36.235 [2024-09-28 16:16:50.743174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.235 pt3 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.235 "name": "raid_bdev1", 00:15:36.235 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:36.235 "strip_size_kb": 64, 00:15:36.235 "state": "online", 00:15:36.235 "raid_level": "raid5f", 00:15:36.235 "superblock": true, 00:15:36.235 "num_base_bdevs": 3, 00:15:36.235 "num_base_bdevs_discovered": 3, 00:15:36.235 "num_base_bdevs_operational": 3, 00:15:36.235 "base_bdevs_list": [ 00:15:36.235 { 00:15:36.235 "name": "pt1", 00:15:36.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.235 "is_configured": true, 00:15:36.235 "data_offset": 2048, 00:15:36.235 "data_size": 63488 00:15:36.235 }, 00:15:36.235 { 00:15:36.235 "name": "pt2", 00:15:36.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.235 "is_configured": true, 00:15:36.235 "data_offset": 2048, 00:15:36.235 "data_size": 63488 00:15:36.235 }, 00:15:36.235 { 00:15:36.235 "name": "pt3", 00:15:36.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.235 "is_configured": true, 00:15:36.235 "data_offset": 2048, 00:15:36.235 "data_size": 63488 00:15:36.235 } 00:15:36.235 ] 00:15:36.235 }' 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.235 16:16:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 [2024-09-28 16:16:51.220666] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.805 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.805 "name": "raid_bdev1", 00:15:36.805 "aliases": [ 00:15:36.805 "5d323496-14b1-4eb8-8d62-d4832b1f5d22" 00:15:36.805 ], 00:15:36.805 "product_name": "Raid Volume", 00:15:36.805 "block_size": 512, 00:15:36.805 "num_blocks": 126976, 00:15:36.805 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:36.805 "assigned_rate_limits": { 00:15:36.805 "rw_ios_per_sec": 0, 00:15:36.805 "rw_mbytes_per_sec": 0, 00:15:36.805 "r_mbytes_per_sec": 0, 00:15:36.805 "w_mbytes_per_sec": 0 00:15:36.805 }, 00:15:36.805 "claimed": false, 00:15:36.805 "zoned": false, 00:15:36.805 "supported_io_types": { 00:15:36.805 "read": true, 00:15:36.805 "write": true, 00:15:36.805 "unmap": false, 00:15:36.805 "flush": false, 00:15:36.805 "reset": true, 00:15:36.805 "nvme_admin": false, 00:15:36.805 "nvme_io": false, 00:15:36.805 "nvme_io_md": false, 00:15:36.805 "write_zeroes": true, 00:15:36.805 "zcopy": false, 00:15:36.805 
"get_zone_info": false, 00:15:36.805 "zone_management": false, 00:15:36.805 "zone_append": false, 00:15:36.805 "compare": false, 00:15:36.805 "compare_and_write": false, 00:15:36.805 "abort": false, 00:15:36.805 "seek_hole": false, 00:15:36.805 "seek_data": false, 00:15:36.805 "copy": false, 00:15:36.805 "nvme_iov_md": false 00:15:36.805 }, 00:15:36.805 "driver_specific": { 00:15:36.805 "raid": { 00:15:36.805 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:36.805 "strip_size_kb": 64, 00:15:36.805 "state": "online", 00:15:36.805 "raid_level": "raid5f", 00:15:36.805 "superblock": true, 00:15:36.805 "num_base_bdevs": 3, 00:15:36.805 "num_base_bdevs_discovered": 3, 00:15:36.805 "num_base_bdevs_operational": 3, 00:15:36.805 "base_bdevs_list": [ 00:15:36.805 { 00:15:36.805 "name": "pt1", 00:15:36.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.805 "is_configured": true, 00:15:36.805 "data_offset": 2048, 00:15:36.805 "data_size": 63488 00:15:36.805 }, 00:15:36.805 { 00:15:36.805 "name": "pt2", 00:15:36.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.806 "is_configured": true, 00:15:36.806 "data_offset": 2048, 00:15:36.806 "data_size": 63488 00:15:36.806 }, 00:15:36.806 { 00:15:36.806 "name": "pt3", 00:15:36.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.806 "is_configured": true, 00:15:36.806 "data_offset": 2048, 00:15:36.806 "data_size": 63488 00:15:36.806 } 00:15:36.806 ] 00:15:36.806 } 00:15:36.806 } 00:15:36.806 }' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:36.806 pt2 00:15:36.806 pt3' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.806 16:16:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.806 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:37.066 [2024-09-28 16:16:51.504142] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5d323496-14b1-4eb8-8d62-d4832b1f5d22 '!=' 5d323496-14b1-4eb8-8d62-d4832b1f5d22 ']' 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.066 [2024-09-28 16:16:51.551951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:37.066 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.067 
16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.067 "name": "raid_bdev1", 00:15:37.067 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:37.067 "strip_size_kb": 64, 00:15:37.067 "state": "online", 00:15:37.067 "raid_level": "raid5f", 00:15:37.067 "superblock": true, 00:15:37.067 "num_base_bdevs": 3, 00:15:37.067 "num_base_bdevs_discovered": 2, 00:15:37.067 "num_base_bdevs_operational": 2, 00:15:37.067 "base_bdevs_list": [ 00:15:37.067 { 00:15:37.067 "name": null, 00:15:37.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.067 "is_configured": false, 00:15:37.067 "data_offset": 0, 00:15:37.067 "data_size": 63488 00:15:37.067 }, 00:15:37.067 { 00:15:37.067 "name": "pt2", 00:15:37.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.067 "is_configured": true, 00:15:37.067 "data_offset": 2048, 00:15:37.067 "data_size": 63488 00:15:37.067 }, 00:15:37.067 { 00:15:37.067 "name": "pt3", 00:15:37.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.067 "is_configured": true, 00:15:37.067 "data_offset": 2048, 00:15:37.067 "data_size": 63488 00:15:37.067 } 00:15:37.067 ] 00:15:37.067 }' 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.067 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.327 16:16:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.327 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.327 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.327 [2024-09-28 16:16:51.995227] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.327 [2024-09-28 16:16:51.995308] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.327 [2024-09-28 16:16:51.995378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.327 [2024-09-28 16:16:51.995439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.327 [2024-09-28 16:16:51.995488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:37.327 16:16:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.327 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:37.327 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.327 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.327 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.587 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.587 [2024-09-28 16:16:52.063195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.588 [2024-09-28 16:16:52.063304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.588 [2024-09-28 16:16:52.063322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:37.588 [2024-09-28 16:16:52.063332] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:37.588 [2024-09-28 16:16:52.065307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.588 [2024-09-28 16:16:52.065345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.588 [2024-09-28 16:16:52.065401] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:37.588 [2024-09-28 16:16:52.065446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.588 pt2 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.588 "name": "raid_bdev1", 00:15:37.588 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:37.588 "strip_size_kb": 64, 00:15:37.588 "state": "configuring", 00:15:37.588 "raid_level": "raid5f", 00:15:37.588 "superblock": true, 00:15:37.588 "num_base_bdevs": 3, 00:15:37.588 "num_base_bdevs_discovered": 1, 00:15:37.588 "num_base_bdevs_operational": 2, 00:15:37.588 "base_bdevs_list": [ 00:15:37.588 { 00:15:37.588 "name": null, 00:15:37.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.588 "is_configured": false, 00:15:37.588 "data_offset": 2048, 00:15:37.588 "data_size": 63488 00:15:37.588 }, 00:15:37.588 { 00:15:37.588 "name": "pt2", 00:15:37.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.588 "is_configured": true, 00:15:37.588 "data_offset": 2048, 00:15:37.588 "data_size": 63488 00:15:37.588 }, 00:15:37.588 { 00:15:37.588 "name": null, 00:15:37.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.588 "is_configured": false, 00:15:37.588 "data_offset": 2048, 00:15:37.588 "data_size": 63488 00:15:37.588 } 00:15:37.588 ] 00:15:37.588 }' 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.588 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.848 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:37.848 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:37.848 16:16:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:37.848 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:37.848 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.848 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.108 [2024-09-28 16:16:52.538345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.108 [2024-09-28 16:16:52.538439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.108 [2024-09-28 16:16:52.538471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:38.108 [2024-09-28 16:16:52.538502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.108 [2024-09-28 16:16:52.538864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.108 [2024-09-28 16:16:52.538920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.108 [2024-09-28 16:16:52.538997] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:38.108 [2024-09-28 16:16:52.539055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.108 [2024-09-28 16:16:52.539232] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:38.108 [2024-09-28 16:16:52.539286] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.108 [2024-09-28 16:16:52.539511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:38.108 [2024-09-28 16:16:52.544875] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:38.108 [2024-09-28 16:16:52.544927] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:38.108 [2024-09-28 16:16:52.545207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.108 pt3 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.108 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.108 16:16:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.108 "name": "raid_bdev1", 00:15:38.108 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:38.108 "strip_size_kb": 64, 00:15:38.108 "state": "online", 00:15:38.108 "raid_level": "raid5f", 00:15:38.108 "superblock": true, 00:15:38.108 "num_base_bdevs": 3, 00:15:38.108 "num_base_bdevs_discovered": 2, 00:15:38.108 "num_base_bdevs_operational": 2, 00:15:38.108 "base_bdevs_list": [ 00:15:38.108 { 00:15:38.108 "name": null, 00:15:38.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.109 "is_configured": false, 00:15:38.109 "data_offset": 2048, 00:15:38.109 "data_size": 63488 00:15:38.109 }, 00:15:38.109 { 00:15:38.109 "name": "pt2", 00:15:38.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.109 "is_configured": true, 00:15:38.109 "data_offset": 2048, 00:15:38.109 "data_size": 63488 00:15:38.109 }, 00:15:38.109 { 00:15:38.109 "name": "pt3", 00:15:38.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.109 "is_configured": true, 00:15:38.109 "data_offset": 2048, 00:15:38.109 "data_size": 63488 00:15:38.109 } 00:15:38.109 ] 00:15:38.109 }' 00:15:38.109 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.109 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.376 [2024-09-28 16:16:52.947000] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.376 [2024-09-28 16:16:52.947074] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.376 [2024-09-28 16:16:52.947156] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.376 [2024-09-28 16:16:52.947207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.376 [2024-09-28 16:16:52.947216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:38.376 16:16:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.376 [2024-09-28 16:16:53.022891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:38.376 [2024-09-28 16:16:53.022939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.376 [2024-09-28 16:16:53.022955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:38.376 [2024-09-28 16:16:53.022963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.376 [2024-09-28 16:16:53.025021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.376 [2024-09-28 16:16:53.025058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.376 [2024-09-28 16:16:53.025116] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:38.376 [2024-09-28 16:16:53.025162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.376 [2024-09-28 16:16:53.025282] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:38.376 [2024-09-28 16:16:53.025294] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.376 [2024-09-28 16:16:53.025309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:38.376 [2024-09-28 16:16:53.025390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.376 pt1 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:38.376 16:16:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.376 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.715 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.715 "name": "raid_bdev1", 00:15:38.715 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:38.715 "strip_size_kb": 64, 00:15:38.715 "state": "configuring", 00:15:38.715 "raid_level": "raid5f", 00:15:38.715 
"superblock": true, 00:15:38.715 "num_base_bdevs": 3, 00:15:38.715 "num_base_bdevs_discovered": 1, 00:15:38.715 "num_base_bdevs_operational": 2, 00:15:38.715 "base_bdevs_list": [ 00:15:38.715 { 00:15:38.715 "name": null, 00:15:38.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.715 "is_configured": false, 00:15:38.715 "data_offset": 2048, 00:15:38.715 "data_size": 63488 00:15:38.715 }, 00:15:38.715 { 00:15:38.715 "name": "pt2", 00:15:38.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.715 "is_configured": true, 00:15:38.715 "data_offset": 2048, 00:15:38.715 "data_size": 63488 00:15:38.715 }, 00:15:38.715 { 00:15:38.715 "name": null, 00:15:38.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.715 "is_configured": false, 00:15:38.715 "data_offset": 2048, 00:15:38.715 "data_size": 63488 00:15:38.715 } 00:15:38.715 ] 00:15:38.715 }' 00:15:38.715 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.715 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.976 [2024-09-28 16:16:53.526038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.976 [2024-09-28 16:16:53.526136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.976 [2024-09-28 16:16:53.526168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:38.976 [2024-09-28 16:16:53.526229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.976 [2024-09-28 16:16:53.526622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.976 [2024-09-28 16:16:53.526679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.976 [2024-09-28 16:16:53.526767] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:38.976 [2024-09-28 16:16:53.526814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.976 [2024-09-28 16:16:53.526947] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:38.976 [2024-09-28 16:16:53.526982] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.976 [2024-09-28 16:16:53.527283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:38.976 [2024-09-28 16:16:53.532741] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:38.976 [2024-09-28 16:16:53.532799] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:38.976 [2024-09-28 16:16:53.533043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.976 pt3 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.976 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.977 "name": "raid_bdev1", 00:15:38.977 "uuid": "5d323496-14b1-4eb8-8d62-d4832b1f5d22", 00:15:38.977 "strip_size_kb": 64, 00:15:38.977 "state": "online", 00:15:38.977 "raid_level": 
"raid5f", 00:15:38.977 "superblock": true, 00:15:38.977 "num_base_bdevs": 3, 00:15:38.977 "num_base_bdevs_discovered": 2, 00:15:38.977 "num_base_bdevs_operational": 2, 00:15:38.977 "base_bdevs_list": [ 00:15:38.977 { 00:15:38.977 "name": null, 00:15:38.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.977 "is_configured": false, 00:15:38.977 "data_offset": 2048, 00:15:38.977 "data_size": 63488 00:15:38.977 }, 00:15:38.977 { 00:15:38.977 "name": "pt2", 00:15:38.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.977 "is_configured": true, 00:15:38.977 "data_offset": 2048, 00:15:38.977 "data_size": 63488 00:15:38.977 }, 00:15:38.977 { 00:15:38.977 "name": "pt3", 00:15:38.977 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.977 "is_configured": true, 00:15:38.977 "data_offset": 2048, 00:15:38.977 "data_size": 63488 00:15:38.977 } 00:15:38.977 ] 00:15:38.977 }' 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.977 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.547 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:39.547 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.547 16:16:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:39.547 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.547 16:16:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:39.547 [2024-09-28 16:16:54.030409] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5d323496-14b1-4eb8-8d62-d4832b1f5d22 '!=' 5d323496-14b1-4eb8-8d62-d4832b1f5d22 ']' 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81135 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81135 ']' 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81135 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81135 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.547 killing process with pid 81135 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81135' 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81135 00:15:39.547 [2024-09-28 16:16:54.116401] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.547 [2024-09-28 16:16:54.116469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:39.547 [2024-09-28 16:16:54.116519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.547 [2024-09-28 16:16:54.116530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:39.547 16:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81135 00:15:39.807 [2024-09-28 16:16:54.399210] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.189 16:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:41.189 00:15:41.189 real 0m7.926s 00:15:41.189 user 0m12.317s 00:15:41.189 sys 0m1.523s 00:15:41.189 16:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.189 ************************************ 00:15:41.189 END TEST raid5f_superblock_test 00:15:41.189 ************************************ 00:15:41.189 16:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.189 16:16:55 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:41.189 16:16:55 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:41.189 16:16:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:41.189 16:16:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.189 16:16:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.189 ************************************ 00:15:41.189 START TEST raid5f_rebuild_test 00:15:41.189 ************************************ 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81576 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81576 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81576 ']' 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.189 16:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.190 16:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:41.190 16:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.190 16:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.190 [2024-09-28 16:16:55.781797] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:41.190 [2024-09-28 16:16:55.782030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:41.190 Zero copy mechanism will not be used. 00:15:41.190 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81576 ] 00:15:41.448 [2024-09-28 16:16:55.950904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.707 [2024-09-28 16:16:56.148690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.707 [2024-09-28 16:16:56.345665] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.707 [2024-09-28 16:16:56.345780] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.968 BaseBdev1_malloc 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.968 
16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.968 [2024-09-28 16:16:56.618466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.968 [2024-09-28 16:16:56.618542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.968 [2024-09-28 16:16:56.618566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.968 [2024-09-28 16:16:56.618579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.968 [2024-09-28 16:16:56.620558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.968 [2024-09-28 16:16:56.620676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.968 BaseBdev1 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.968 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 BaseBdev2_malloc 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 [2024-09-28 16:16:56.683469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:42.228 [2024-09-28 16:16:56.683528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.228 [2024-09-28 16:16:56.683548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:42.228 [2024-09-28 16:16:56.683557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.228 [2024-09-28 16:16:56.685471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.228 [2024-09-28 16:16:56.685509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:42.228 BaseBdev2 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 BaseBdev3_malloc 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 [2024-09-28 16:16:56.735564] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:42.228 [2024-09-28 16:16:56.735615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.228 [2024-09-28 16:16:56.735635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:42.228 [2024-09-28 16:16:56.735645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.228 [2024-09-28 16:16:56.737503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.228 [2024-09-28 16:16:56.737542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:42.228 BaseBdev3 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 spare_malloc 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 spare_delay 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 [2024-09-28 16:16:56.800783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:42.228 [2024-09-28 16:16:56.800836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.228 [2024-09-28 16:16:56.800852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:42.228 [2024-09-28 16:16:56.800863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.228 [2024-09-28 16:16:56.802813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.228 [2024-09-28 16:16:56.802934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:42.228 spare 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.228 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 [2024-09-28 16:16:56.812821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.228 [2024-09-28 16:16:56.814456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.228 [2024-09-28 16:16:56.814515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.228 [2024-09-28 16:16:56.814592] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:42.228 [2024-09-28 16:16:56.814600] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:42.228 [2024-09-28 
16:16:56.814825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:42.228 [2024-09-28 16:16:56.820153] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:42.228 [2024-09-28 16:16:56.820178] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:42.228 [2024-09-28 16:16:56.820359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.229 "name": "raid_bdev1", 00:15:42.229 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:42.229 "strip_size_kb": 64, 00:15:42.229 "state": "online", 00:15:42.229 "raid_level": "raid5f", 00:15:42.229 "superblock": false, 00:15:42.229 "num_base_bdevs": 3, 00:15:42.229 "num_base_bdevs_discovered": 3, 00:15:42.229 "num_base_bdevs_operational": 3, 00:15:42.229 "base_bdevs_list": [ 00:15:42.229 { 00:15:42.229 "name": "BaseBdev1", 00:15:42.229 "uuid": "f5f7d59b-a54b-5a55-ba12-cd1e574b5a97", 00:15:42.229 "is_configured": true, 00:15:42.229 "data_offset": 0, 00:15:42.229 "data_size": 65536 00:15:42.229 }, 00:15:42.229 { 00:15:42.229 "name": "BaseBdev2", 00:15:42.229 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:42.229 "is_configured": true, 00:15:42.229 "data_offset": 0, 00:15:42.229 "data_size": 65536 00:15:42.229 }, 00:15:42.229 { 00:15:42.229 "name": "BaseBdev3", 00:15:42.229 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:42.229 "is_configured": true, 00:15:42.229 "data_offset": 0, 00:15:42.229 "data_size": 65536 00:15:42.229 } 00:15:42.229 ] 00:15:42.229 }' 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.229 16:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.799 16:16:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:42.799 [2024-09-28 16:16:57.297632] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.799 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:43.060 [2024-09-28 16:16:57.549138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:43.060 /dev/nbd0 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.060 1+0 records in 00:15:43.060 1+0 records out 00:15:43.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425722 s, 
9.6 MB/s 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:43.060 16:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:43.628 512+0 records in 00:15:43.628 512+0 records out 00:15:43.628 67108864 bytes (67 MB, 64 MiB) copied, 0.477475 s, 141 MB/s 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.628 [2024-09-28 16:16:58.308409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.628 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.888 [2024-09-28 16:16:58.323360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.888 "name": "raid_bdev1", 00:15:43.888 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:43.888 "strip_size_kb": 64, 00:15:43.888 "state": "online", 00:15:43.888 "raid_level": "raid5f", 00:15:43.888 "superblock": false, 00:15:43.888 "num_base_bdevs": 3, 00:15:43.888 "num_base_bdevs_discovered": 2, 00:15:43.888 "num_base_bdevs_operational": 2, 00:15:43.888 "base_bdevs_list": [ 00:15:43.888 { 00:15:43.888 "name": null, 00:15:43.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.888 "is_configured": false, 00:15:43.888 "data_offset": 0, 00:15:43.888 "data_size": 65536 00:15:43.888 }, 
00:15:43.888 { 00:15:43.888 "name": "BaseBdev2", 00:15:43.888 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:43.888 "is_configured": true, 00:15:43.888 "data_offset": 0, 00:15:43.888 "data_size": 65536 00:15:43.888 }, 00:15:43.888 { 00:15:43.888 "name": "BaseBdev3", 00:15:43.888 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:43.888 "is_configured": true, 00:15:43.888 "data_offset": 0, 00:15:43.888 "data_size": 65536 00:15:43.888 } 00:15:43.888 ] 00:15:43.888 }' 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.888 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.148 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.148 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.148 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.148 [2024-09-28 16:16:58.818524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.407 [2024-09-28 16:16:58.833148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:44.407 16:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.407 16:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:44.407 [2024-09-28 16:16:58.840421] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.347 "name": "raid_bdev1", 00:15:45.347 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:45.347 "strip_size_kb": 64, 00:15:45.347 "state": "online", 00:15:45.347 "raid_level": "raid5f", 00:15:45.347 "superblock": false, 00:15:45.347 "num_base_bdevs": 3, 00:15:45.347 "num_base_bdevs_discovered": 3, 00:15:45.347 "num_base_bdevs_operational": 3, 00:15:45.347 "process": { 00:15:45.347 "type": "rebuild", 00:15:45.347 "target": "spare", 00:15:45.347 "progress": { 00:15:45.347 "blocks": 20480, 00:15:45.347 "percent": 15 00:15:45.347 } 00:15:45.347 }, 00:15:45.347 "base_bdevs_list": [ 00:15:45.347 { 00:15:45.347 "name": "spare", 00:15:45.347 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:45.347 "is_configured": true, 00:15:45.347 "data_offset": 0, 00:15:45.347 "data_size": 65536 00:15:45.347 }, 00:15:45.347 { 00:15:45.347 "name": "BaseBdev2", 00:15:45.347 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:45.347 "is_configured": true, 00:15:45.347 "data_offset": 0, 00:15:45.347 "data_size": 65536 00:15:45.347 }, 00:15:45.347 { 00:15:45.347 "name": "BaseBdev3", 00:15:45.347 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:45.347 "is_configured": true, 00:15:45.347 
"data_offset": 0, 00:15:45.347 "data_size": 65536 00:15:45.347 } 00:15:45.347 ] 00:15:45.347 }' 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.347 16:16:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.347 [2024-09-28 16:16:59.991435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.606 [2024-09-28 16:17:00.047561] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.606 [2024-09-28 16:17:00.047672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.606 [2024-09-28 16:17:00.047710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.606 [2024-09-28 16:17:00.047733] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.606 16:17:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.606 "name": "raid_bdev1", 00:15:45.606 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:45.606 "strip_size_kb": 64, 00:15:45.606 "state": "online", 00:15:45.606 "raid_level": "raid5f", 00:15:45.606 "superblock": false, 00:15:45.606 "num_base_bdevs": 3, 00:15:45.606 "num_base_bdevs_discovered": 2, 00:15:45.606 "num_base_bdevs_operational": 2, 00:15:45.606 "base_bdevs_list": [ 00:15:45.606 { 00:15:45.606 "name": null, 00:15:45.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.606 "is_configured": false, 00:15:45.606 "data_offset": 0, 00:15:45.606 "data_size": 65536 00:15:45.606 }, 00:15:45.606 { 00:15:45.606 
"name": "BaseBdev2", 00:15:45.606 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:45.606 "is_configured": true, 00:15:45.606 "data_offset": 0, 00:15:45.606 "data_size": 65536 00:15:45.606 }, 00:15:45.606 { 00:15:45.606 "name": "BaseBdev3", 00:15:45.606 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:45.606 "is_configured": true, 00:15:45.606 "data_offset": 0, 00:15:45.606 "data_size": 65536 00:15:45.606 } 00:15:45.606 ] 00:15:45.606 }' 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.606 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.866 "name": "raid_bdev1", 00:15:45.866 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:45.866 "strip_size_kb": 64, 00:15:45.866 "state": 
"online", 00:15:45.866 "raid_level": "raid5f", 00:15:45.866 "superblock": false, 00:15:45.866 "num_base_bdevs": 3, 00:15:45.866 "num_base_bdevs_discovered": 2, 00:15:45.866 "num_base_bdevs_operational": 2, 00:15:45.866 "base_bdevs_list": [ 00:15:45.866 { 00:15:45.866 "name": null, 00:15:45.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.866 "is_configured": false, 00:15:45.866 "data_offset": 0, 00:15:45.866 "data_size": 65536 00:15:45.866 }, 00:15:45.866 { 00:15:45.866 "name": "BaseBdev2", 00:15:45.866 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:45.866 "is_configured": true, 00:15:45.866 "data_offset": 0, 00:15:45.866 "data_size": 65536 00:15:45.866 }, 00:15:45.866 { 00:15:45.866 "name": "BaseBdev3", 00:15:45.866 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:45.866 "is_configured": true, 00:15:45.866 "data_offset": 0, 00:15:45.866 "data_size": 65536 00:15:45.866 } 00:15:45.866 ] 00:15:45.866 }' 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.866 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.125 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.125 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.125 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.125 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.125 16:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.125 [2024-09-28 16:17:00.601865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.125 [2024-09-28 16:17:00.615728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:46.125 16:17:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.125 16:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:46.125 [2024-09-28 16:17:00.623060] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.063 "name": "raid_bdev1", 00:15:47.063 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:47.063 "strip_size_kb": 64, 00:15:47.063 "state": "online", 00:15:47.063 "raid_level": "raid5f", 00:15:47.063 "superblock": false, 00:15:47.063 "num_base_bdevs": 3, 00:15:47.063 "num_base_bdevs_discovered": 3, 00:15:47.063 "num_base_bdevs_operational": 3, 00:15:47.063 "process": { 00:15:47.063 "type": "rebuild", 00:15:47.063 "target": "spare", 00:15:47.063 "progress": { 
00:15:47.063 "blocks": 20480, 00:15:47.063 "percent": 15 00:15:47.063 } 00:15:47.063 }, 00:15:47.063 "base_bdevs_list": [ 00:15:47.063 { 00:15:47.063 "name": "spare", 00:15:47.063 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:47.063 "is_configured": true, 00:15:47.063 "data_offset": 0, 00:15:47.063 "data_size": 65536 00:15:47.063 }, 00:15:47.063 { 00:15:47.063 "name": "BaseBdev2", 00:15:47.063 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:47.063 "is_configured": true, 00:15:47.063 "data_offset": 0, 00:15:47.063 "data_size": 65536 00:15:47.063 }, 00:15:47.063 { 00:15:47.063 "name": "BaseBdev3", 00:15:47.063 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:47.063 "is_configured": true, 00:15:47.063 "data_offset": 0, 00:15:47.063 "data_size": 65536 00:15:47.063 } 00:15:47.063 ] 00:15:47.063 }' 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.063 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.323 "name": "raid_bdev1", 00:15:47.323 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:47.323 "strip_size_kb": 64, 00:15:47.323 "state": "online", 00:15:47.323 "raid_level": "raid5f", 00:15:47.323 "superblock": false, 00:15:47.323 "num_base_bdevs": 3, 00:15:47.323 "num_base_bdevs_discovered": 3, 00:15:47.323 "num_base_bdevs_operational": 3, 00:15:47.323 "process": { 00:15:47.323 "type": "rebuild", 00:15:47.323 "target": "spare", 00:15:47.323 "progress": { 00:15:47.323 "blocks": 22528, 00:15:47.323 "percent": 17 00:15:47.323 } 00:15:47.323 }, 00:15:47.323 "base_bdevs_list": [ 00:15:47.323 { 00:15:47.323 "name": "spare", 00:15:47.323 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:47.323 "is_configured": true, 00:15:47.323 "data_offset": 0, 00:15:47.323 "data_size": 65536 00:15:47.323 }, 00:15:47.323 { 00:15:47.323 "name": "BaseBdev2", 00:15:47.323 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:47.323 "is_configured": true, 00:15:47.323 
"data_offset": 0, 00:15:47.323 "data_size": 65536 00:15:47.323 }, 00:15:47.323 { 00:15:47.323 "name": "BaseBdev3", 00:15:47.323 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:47.323 "is_configured": true, 00:15:47.323 "data_offset": 0, 00:15:47.323 "data_size": 65536 00:15:47.323 } 00:15:47.323 ] 00:15:47.323 }' 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.323 16:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.262 16:17:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.262 16:17:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.521 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.521 "name": "raid_bdev1", 00:15:48.521 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:48.521 "strip_size_kb": 64, 00:15:48.521 "state": "online", 00:15:48.521 "raid_level": "raid5f", 00:15:48.521 "superblock": false, 00:15:48.521 "num_base_bdevs": 3, 00:15:48.521 "num_base_bdevs_discovered": 3, 00:15:48.521 "num_base_bdevs_operational": 3, 00:15:48.521 "process": { 00:15:48.521 "type": "rebuild", 00:15:48.521 "target": "spare", 00:15:48.521 "progress": { 00:15:48.521 "blocks": 45056, 00:15:48.521 "percent": 34 00:15:48.521 } 00:15:48.521 }, 00:15:48.521 "base_bdevs_list": [ 00:15:48.521 { 00:15:48.522 "name": "spare", 00:15:48.522 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:48.522 "is_configured": true, 00:15:48.522 "data_offset": 0, 00:15:48.522 "data_size": 65536 00:15:48.522 }, 00:15:48.522 { 00:15:48.522 "name": "BaseBdev2", 00:15:48.522 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:48.522 "is_configured": true, 00:15:48.522 "data_offset": 0, 00:15:48.522 "data_size": 65536 00:15:48.522 }, 00:15:48.522 { 00:15:48.522 "name": "BaseBdev3", 00:15:48.522 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:48.522 "is_configured": true, 00:15:48.522 "data_offset": 0, 00:15:48.522 "data_size": 65536 00:15:48.522 } 00:15:48.522 ] 00:15:48.522 }' 00:15:48.522 16:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.522 16:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.522 16:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.522 16:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.522 16:17:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.461 "name": "raid_bdev1", 00:15:49.461 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:49.461 "strip_size_kb": 64, 00:15:49.461 "state": "online", 00:15:49.461 "raid_level": "raid5f", 00:15:49.461 "superblock": false, 00:15:49.461 "num_base_bdevs": 3, 00:15:49.461 "num_base_bdevs_discovered": 3, 00:15:49.461 "num_base_bdevs_operational": 3, 00:15:49.461 "process": { 00:15:49.461 "type": "rebuild", 00:15:49.461 "target": "spare", 00:15:49.461 "progress": { 00:15:49.461 "blocks": 69632, 00:15:49.461 "percent": 53 00:15:49.461 } 00:15:49.461 }, 00:15:49.461 "base_bdevs_list": [ 00:15:49.461 { 00:15:49.461 "name": "spare", 00:15:49.461 
"uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:49.461 "is_configured": true, 00:15:49.461 "data_offset": 0, 00:15:49.461 "data_size": 65536 00:15:49.461 }, 00:15:49.461 { 00:15:49.461 "name": "BaseBdev2", 00:15:49.461 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:49.461 "is_configured": true, 00:15:49.461 "data_offset": 0, 00:15:49.461 "data_size": 65536 00:15:49.461 }, 00:15:49.461 { 00:15:49.461 "name": "BaseBdev3", 00:15:49.461 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:49.461 "is_configured": true, 00:15:49.461 "data_offset": 0, 00:15:49.461 "data_size": 65536 00:15:49.461 } 00:15:49.461 ] 00:15:49.461 }' 00:15:49.461 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.721 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.721 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.721 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.721 16:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.660 16:17:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.660 "name": "raid_bdev1", 00:15:50.660 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:50.660 "strip_size_kb": 64, 00:15:50.660 "state": "online", 00:15:50.660 "raid_level": "raid5f", 00:15:50.660 "superblock": false, 00:15:50.660 "num_base_bdevs": 3, 00:15:50.660 "num_base_bdevs_discovered": 3, 00:15:50.660 "num_base_bdevs_operational": 3, 00:15:50.660 "process": { 00:15:50.660 "type": "rebuild", 00:15:50.660 "target": "spare", 00:15:50.660 "progress": { 00:15:50.660 "blocks": 92160, 00:15:50.660 "percent": 70 00:15:50.660 } 00:15:50.660 }, 00:15:50.660 "base_bdevs_list": [ 00:15:50.660 { 00:15:50.660 "name": "spare", 00:15:50.660 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:50.660 "is_configured": true, 00:15:50.660 "data_offset": 0, 00:15:50.660 "data_size": 65536 00:15:50.660 }, 00:15:50.660 { 00:15:50.660 "name": "BaseBdev2", 00:15:50.660 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:50.660 "is_configured": true, 00:15:50.660 "data_offset": 0, 00:15:50.660 "data_size": 65536 00:15:50.660 }, 00:15:50.660 { 00:15:50.660 "name": "BaseBdev3", 00:15:50.660 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:50.660 "is_configured": true, 00:15:50.660 "data_offset": 0, 00:15:50.660 "data_size": 65536 00:15:50.660 } 00:15:50.660 ] 00:15:50.660 }' 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.660 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.920 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.920 16:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.858 "name": "raid_bdev1", 00:15:51.858 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:51.858 "strip_size_kb": 64, 00:15:51.858 "state": "online", 00:15:51.858 "raid_level": "raid5f", 00:15:51.858 "superblock": false, 00:15:51.858 "num_base_bdevs": 3, 00:15:51.858 "num_base_bdevs_discovered": 3, 00:15:51.858 
"num_base_bdevs_operational": 3, 00:15:51.858 "process": { 00:15:51.858 "type": "rebuild", 00:15:51.858 "target": "spare", 00:15:51.858 "progress": { 00:15:51.858 "blocks": 116736, 00:15:51.858 "percent": 89 00:15:51.858 } 00:15:51.858 }, 00:15:51.858 "base_bdevs_list": [ 00:15:51.858 { 00:15:51.858 "name": "spare", 00:15:51.858 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:51.858 "is_configured": true, 00:15:51.858 "data_offset": 0, 00:15:51.858 "data_size": 65536 00:15:51.858 }, 00:15:51.858 { 00:15:51.858 "name": "BaseBdev2", 00:15:51.858 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:51.858 "is_configured": true, 00:15:51.858 "data_offset": 0, 00:15:51.858 "data_size": 65536 00:15:51.858 }, 00:15:51.858 { 00:15:51.858 "name": "BaseBdev3", 00:15:51.858 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:51.858 "is_configured": true, 00:15:51.858 "data_offset": 0, 00:15:51.858 "data_size": 65536 00:15:51.858 } 00:15:51.858 ] 00:15:51.858 }' 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.858 16:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.428 [2024-09-28 16:17:07.058018] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:52.428 [2024-09-28 16:17:07.058089] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:52.428 [2024-09-28 16:17:07.058128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.997 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.997 "name": "raid_bdev1", 00:15:52.997 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:52.997 "strip_size_kb": 64, 00:15:52.997 "state": "online", 00:15:52.997 "raid_level": "raid5f", 00:15:52.997 "superblock": false, 00:15:52.997 "num_base_bdevs": 3, 00:15:52.997 "num_base_bdevs_discovered": 3, 00:15:52.997 "num_base_bdevs_operational": 3, 00:15:52.998 "base_bdevs_list": [ 00:15:52.998 { 00:15:52.998 "name": "spare", 00:15:52.998 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:52.998 "is_configured": true, 00:15:52.998 "data_offset": 0, 00:15:52.998 "data_size": 65536 00:15:52.998 }, 00:15:52.998 { 00:15:52.998 "name": "BaseBdev2", 00:15:52.998 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:52.998 "is_configured": true, 00:15:52.998 
"data_offset": 0, 00:15:52.998 "data_size": 65536 00:15:52.998 }, 00:15:52.998 { 00:15:52.998 "name": "BaseBdev3", 00:15:52.998 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:52.998 "is_configured": true, 00:15:52.998 "data_offset": 0, 00:15:52.998 "data_size": 65536 00:15:52.998 } 00:15:52.998 ] 00:15:52.998 }' 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.998 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.265 16:17:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.265 "name": "raid_bdev1", 00:15:53.265 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:53.265 "strip_size_kb": 64, 00:15:53.265 "state": "online", 00:15:53.265 "raid_level": "raid5f", 00:15:53.265 "superblock": false, 00:15:53.265 "num_base_bdevs": 3, 00:15:53.265 "num_base_bdevs_discovered": 3, 00:15:53.265 "num_base_bdevs_operational": 3, 00:15:53.265 "base_bdevs_list": [ 00:15:53.265 { 00:15:53.265 "name": "spare", 00:15:53.265 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:53.265 "is_configured": true, 00:15:53.265 "data_offset": 0, 00:15:53.265 "data_size": 65536 00:15:53.265 }, 00:15:53.265 { 00:15:53.265 "name": "BaseBdev2", 00:15:53.265 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:53.265 "is_configured": true, 00:15:53.265 "data_offset": 0, 00:15:53.265 "data_size": 65536 00:15:53.265 }, 00:15:53.265 { 00:15:53.265 "name": "BaseBdev3", 00:15:53.265 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:53.265 "is_configured": true, 00:15:53.265 "data_offset": 0, 00:15:53.265 "data_size": 65536 00:15:53.265 } 00:15:53.265 ] 00:15:53.265 }' 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.265 16:17:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.265 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.265 "name": "raid_bdev1", 00:15:53.265 "uuid": "f35ea7f7-3b45-45a1-961f-72b27292a31e", 00:15:53.265 "strip_size_kb": 64, 00:15:53.265 "state": "online", 00:15:53.265 "raid_level": "raid5f", 00:15:53.265 "superblock": false, 00:15:53.265 "num_base_bdevs": 3, 00:15:53.265 "num_base_bdevs_discovered": 3, 00:15:53.265 "num_base_bdevs_operational": 3, 00:15:53.265 "base_bdevs_list": [ 00:15:53.265 { 00:15:53.265 "name": "spare", 00:15:53.265 "uuid": "a30cdb96-c1ea-5522-b1e3-c9940cd76088", 00:15:53.265 "is_configured": true, 00:15:53.265 "data_offset": 0, 00:15:53.265 "data_size": 65536 00:15:53.265 }, 00:15:53.265 { 00:15:53.265 
"name": "BaseBdev2", 00:15:53.266 "uuid": "a40f56b4-6183-5467-951a-b9ab50e04a5a", 00:15:53.266 "is_configured": true, 00:15:53.266 "data_offset": 0, 00:15:53.266 "data_size": 65536 00:15:53.266 }, 00:15:53.266 { 00:15:53.266 "name": "BaseBdev3", 00:15:53.266 "uuid": "7b37c986-dc8f-5e94-9d32-294e750ef60d", 00:15:53.266 "is_configured": true, 00:15:53.266 "data_offset": 0, 00:15:53.266 "data_size": 65536 00:15:53.266 } 00:15:53.266 ] 00:15:53.266 }' 00:15:53.266 16:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.266 16:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 [2024-09-28 16:17:08.270853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.832 [2024-09-28 16:17:08.270942] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.832 [2024-09-28 16:17:08.271021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.832 [2024-09-28 16:17:08.271099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.832 [2024-09-28 16:17:08.271126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.832 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:53.833 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:54.091 /dev/nbd0 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.091 1+0 records in 00:15:54.091 1+0 records out 00:15:54.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358159 s, 11.4 MB/s 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:54.091 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:54.350 /dev/nbd1 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.350 1+0 records in 00:15:54.350 1+0 records out 00:15:54.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434872 s, 9.4 MB/s 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:54.350 16:17:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:54.350 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.351 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:54.351 16:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:54.351 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:54.351 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.351 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:54.351 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.351 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:54.351 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.351 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.609 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81576 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81576 ']' 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81576 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81576 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.868 killing process with pid 81576 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81576' 00:15:54.868 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81576 00:15:54.868 Received shutdown signal, test time was about 60.000000 seconds 00:15:54.868 00:15:54.868 Latency(us) 00:15:54.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.868 =================================================================================================================== 00:15:54.868 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:54.868 [2024-09-28 16:17:09.518072] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.869 16:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81576 00:15:55.436 [2024-09-28 16:17:09.885823] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.375 16:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:56.375 00:15:56.375 real 0m15.382s 00:15:56.375 user 0m18.746s 00:15:56.375 sys 0m2.265s 00:15:56.375 16:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.375 16:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.375 ************************************ 00:15:56.375 END TEST raid5f_rebuild_test 00:15:56.375 ************************************ 00:15:56.634 16:17:11 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:56.634 16:17:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:56.634 16:17:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.634 16:17:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.634 
************************************ 00:15:56.634 START TEST raid5f_rebuild_test_sb 00:15:56.635 ************************************ 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82016 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82016 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82016 ']' 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.635 
16:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.635 16:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.635 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:56.635 Zero copy mechanism will not be used. 00:15:56.635 [2024-09-28 16:17:11.249757] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:56.635 [2024-09-28 16:17:11.249927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82016 ] 00:15:56.894 [2024-09-28 16:17:11.416450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.153 [2024-09-28 16:17:11.608113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.153 [2024-09-28 16:17:11.793551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.153 [2024-09-28 16:17:11.793587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.412 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.412 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:57.412 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:57.412 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:57.412 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.412 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.672 BaseBdev1_malloc 00:15:57.672 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.672 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:57.672 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.672 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.672 [2024-09-28 16:17:12.121503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:57.672 [2024-09-28 16:17:12.121575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.672 [2024-09-28 16:17:12.121595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:57.673 [2024-09-28 16:17:12.121608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.673 [2024-09-28 16:17:12.123604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.673 [2024-09-28 16:17:12.123644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:57.673 BaseBdev1 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 
16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 BaseBdev2_malloc 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 [2024-09-28 16:17:12.209096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:57.673 [2024-09-28 16:17:12.209156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.673 [2024-09-28 16:17:12.209172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:57.673 [2024-09-28 16:17:12.209183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.673 [2024-09-28 16:17:12.211164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.673 [2024-09-28 16:17:12.211201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:57.673 BaseBdev2 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 BaseBdev3_malloc 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 [2024-09-28 16:17:12.260794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:57.673 [2024-09-28 16:17:12.260862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.673 [2024-09-28 16:17:12.260880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:57.673 [2024-09-28 16:17:12.260890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.673 [2024-09-28 16:17:12.262803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.673 [2024-09-28 16:17:12.262844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:57.673 BaseBdev3 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 spare_malloc 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 spare_delay 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 [2024-09-28 16:17:12.326747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:57.673 [2024-09-28 16:17:12.326799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.673 [2024-09-28 16:17:12.326815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:57.673 [2024-09-28 16:17:12.326825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.673 [2024-09-28 16:17:12.328782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.673 [2024-09-28 16:17:12.328838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:57.673 spare 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.673 [2024-09-28 16:17:12.338803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:57.673 [2024-09-28 16:17:12.340493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.673 [2024-09-28 16:17:12.340567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.673 [2024-09-28 16:17:12.340732] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:57.673 [2024-09-28 16:17:12.340743] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:57.673 [2024-09-28 16:17:12.340958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:57.673 [2024-09-28 16:17:12.346105] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:57.673 [2024-09-28 16:17:12.346130] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:57.673 [2024-09-28 16:17:12.346323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.673 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.932 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.932 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.932 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.932 "name": "raid_bdev1", 00:15:57.932 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:15:57.933 "strip_size_kb": 64, 00:15:57.933 "state": "online", 00:15:57.933 "raid_level": "raid5f", 00:15:57.933 "superblock": true, 00:15:57.933 "num_base_bdevs": 3, 00:15:57.933 "num_base_bdevs_discovered": 3, 00:15:57.933 "num_base_bdevs_operational": 3, 00:15:57.933 "base_bdevs_list": [ 00:15:57.933 { 00:15:57.933 "name": "BaseBdev1", 00:15:57.933 "uuid": "fea5d618-2d9f-5584-b09b-a6c9fb0cd0e0", 00:15:57.933 "is_configured": true, 00:15:57.933 "data_offset": 2048, 00:15:57.933 "data_size": 63488 00:15:57.933 }, 00:15:57.933 { 00:15:57.933 "name": "BaseBdev2", 00:15:57.933 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:15:57.933 "is_configured": true, 00:15:57.933 "data_offset": 2048, 00:15:57.933 "data_size": 63488 00:15:57.933 }, 00:15:57.933 { 00:15:57.933 "name": "BaseBdev3", 00:15:57.933 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:15:57.933 "is_configured": true, 00:15:57.933 "data_offset": 2048, 00:15:57.933 "data_size": 63488 00:15:57.933 } 
00:15:57.933 ] 00:15:57.933 }' 00:15:57.933 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.933 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:58.192 [2024-09-28 16:17:12.827614] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.192 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:58.452 
16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.452 16:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:58.452 [2024-09-28 16:17:13.079361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:58.452 /dev/nbd0 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:58.452 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.452 1+0 records in 00:15:58.452 1+0 records out 00:15:58.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004015 s, 10.2 MB/s 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:58.712 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:59.281 496+0 records in 
00:15:59.281 496+0 records out 00:15:59.281 65011712 bytes (65 MB, 62 MiB) copied, 0.615444 s, 106 MB/s 00:15:59.281 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:59.281 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.281 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:59.281 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.281 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:59.281 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.281 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.541 [2024-09-28 16:17:13.983396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 
00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.541 16:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.541 [2024-09-28 16:17:14.001870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.541 "name": "raid_bdev1", 00:15:59.541 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:15:59.541 "strip_size_kb": 64, 00:15:59.541 "state": "online", 00:15:59.541 "raid_level": "raid5f", 00:15:59.541 "superblock": true, 00:15:59.541 "num_base_bdevs": 3, 00:15:59.541 "num_base_bdevs_discovered": 2, 00:15:59.541 "num_base_bdevs_operational": 2, 00:15:59.541 "base_bdevs_list": [ 00:15:59.541 { 00:15:59.541 "name": null, 00:15:59.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.541 "is_configured": false, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 63488 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev2", 00:15:59.541 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 2048, 00:15:59.541 "data_size": 63488 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev3", 00:15:59.541 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 2048, 00:15:59.541 "data_size": 63488 00:15:59.541 } 00:15:59.541 ] 00:15:59.541 }' 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.541 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.801 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.801 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.801 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.801 [2024-09-28 16:17:14.445129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.801 [2024-09-28 
16:17:14.459163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:59.801 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.801 16:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:59.801 [2024-09-28 16:17:14.466148] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.182 "name": "raid_bdev1", 00:16:01.182 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:01.182 "strip_size_kb": 64, 00:16:01.182 "state": "online", 00:16:01.182 "raid_level": "raid5f", 00:16:01.182 "superblock": true, 00:16:01.182 "num_base_bdevs": 3, 00:16:01.182 "num_base_bdevs_discovered": 3, 00:16:01.182 
"num_base_bdevs_operational": 3, 00:16:01.182 "process": { 00:16:01.182 "type": "rebuild", 00:16:01.182 "target": "spare", 00:16:01.182 "progress": { 00:16:01.182 "blocks": 20480, 00:16:01.182 "percent": 16 00:16:01.182 } 00:16:01.182 }, 00:16:01.182 "base_bdevs_list": [ 00:16:01.182 { 00:16:01.182 "name": "spare", 00:16:01.182 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:01.182 "is_configured": true, 00:16:01.182 "data_offset": 2048, 00:16:01.182 "data_size": 63488 00:16:01.182 }, 00:16:01.182 { 00:16:01.182 "name": "BaseBdev2", 00:16:01.182 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:01.182 "is_configured": true, 00:16:01.182 "data_offset": 2048, 00:16:01.182 "data_size": 63488 00:16:01.182 }, 00:16:01.182 { 00:16:01.182 "name": "BaseBdev3", 00:16:01.182 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:01.182 "is_configured": true, 00:16:01.182 "data_offset": 2048, 00:16:01.182 "data_size": 63488 00:16:01.182 } 00:16:01.182 ] 00:16:01.182 }' 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.182 [2024-09-28 16:17:15.620986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.182 [2024-09-28 16:17:15.673094] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild 
on raid bdev raid_bdev1: No such device 00:16:01.182 [2024-09-28 16:17:15.673148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.182 [2024-09-28 16:17:15.673164] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.182 [2024-09-28 16:17:15.673172] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.182 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.183 "name": "raid_bdev1", 00:16:01.183 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:01.183 "strip_size_kb": 64, 00:16:01.183 "state": "online", 00:16:01.183 "raid_level": "raid5f", 00:16:01.183 "superblock": true, 00:16:01.183 "num_base_bdevs": 3, 00:16:01.183 "num_base_bdevs_discovered": 2, 00:16:01.183 "num_base_bdevs_operational": 2, 00:16:01.183 "base_bdevs_list": [ 00:16:01.183 { 00:16:01.183 "name": null, 00:16:01.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.183 "is_configured": false, 00:16:01.183 "data_offset": 0, 00:16:01.183 "data_size": 63488 00:16:01.183 }, 00:16:01.183 { 00:16:01.183 "name": "BaseBdev2", 00:16:01.183 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:01.183 "is_configured": true, 00:16:01.183 "data_offset": 2048, 00:16:01.183 "data_size": 63488 00:16:01.183 }, 00:16:01.183 { 00:16:01.183 "name": "BaseBdev3", 00:16:01.183 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:01.183 "is_configured": true, 00:16:01.183 "data_offset": 2048, 00:16:01.183 "data_size": 63488 00:16:01.183 } 00:16:01.183 ] 00:16:01.183 }' 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.183 16:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.441 16:17:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.441 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.700 "name": "raid_bdev1", 00:16:01.700 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:01.700 "strip_size_kb": 64, 00:16:01.700 "state": "online", 00:16:01.700 "raid_level": "raid5f", 00:16:01.700 "superblock": true, 00:16:01.700 "num_base_bdevs": 3, 00:16:01.700 "num_base_bdevs_discovered": 2, 00:16:01.700 "num_base_bdevs_operational": 2, 00:16:01.700 "base_bdevs_list": [ 00:16:01.700 { 00:16:01.700 "name": null, 00:16:01.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.700 "is_configured": false, 00:16:01.700 "data_offset": 0, 00:16:01.700 "data_size": 63488 00:16:01.700 }, 00:16:01.700 { 00:16:01.700 "name": "BaseBdev2", 00:16:01.700 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:01.700 "is_configured": true, 00:16:01.700 "data_offset": 2048, 00:16:01.700 "data_size": 63488 00:16:01.700 }, 00:16:01.700 { 00:16:01.700 "name": "BaseBdev3", 00:16:01.700 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:01.700 "is_configured": true, 00:16:01.700 "data_offset": 2048, 00:16:01.700 "data_size": 63488 00:16:01.700 } 00:16:01.700 ] 00:16:01.700 }' 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.700 [2024-09-28 16:17:16.243269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.700 [2024-09-28 16:17:16.256834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.700 16:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:01.700 [2024-09-28 16:17:16.263872] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.640 "name": "raid_bdev1", 00:16:02.640 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:02.640 "strip_size_kb": 64, 00:16:02.640 "state": "online", 00:16:02.640 "raid_level": "raid5f", 00:16:02.640 "superblock": true, 00:16:02.640 "num_base_bdevs": 3, 00:16:02.640 "num_base_bdevs_discovered": 3, 00:16:02.640 "num_base_bdevs_operational": 3, 00:16:02.640 "process": { 00:16:02.640 "type": "rebuild", 00:16:02.640 "target": "spare", 00:16:02.640 "progress": { 00:16:02.640 "blocks": 20480, 00:16:02.640 "percent": 16 00:16:02.640 } 00:16:02.640 }, 00:16:02.640 "base_bdevs_list": [ 00:16:02.640 { 00:16:02.640 "name": "spare", 00:16:02.640 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:02.640 "is_configured": true, 00:16:02.640 "data_offset": 2048, 00:16:02.640 "data_size": 63488 00:16:02.640 }, 00:16:02.640 { 00:16:02.640 "name": "BaseBdev2", 00:16:02.640 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:02.640 "is_configured": true, 00:16:02.640 "data_offset": 2048, 00:16:02.640 "data_size": 63488 00:16:02.640 }, 00:16:02.640 { 00:16:02.640 "name": "BaseBdev3", 00:16:02.640 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:02.640 "is_configured": true, 00:16:02.640 "data_offset": 2048, 00:16:02.640 "data_size": 63488 00:16:02.640 } 00:16:02.640 ] 00:16:02.640 }' 00:16:02.640 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:02.900 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=570 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.900 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.900 "name": "raid_bdev1", 00:16:02.900 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:02.900 "strip_size_kb": 64, 00:16:02.900 "state": "online", 00:16:02.900 "raid_level": "raid5f", 00:16:02.900 "superblock": true, 00:16:02.900 "num_base_bdevs": 3, 00:16:02.900 "num_base_bdevs_discovered": 3, 00:16:02.900 "num_base_bdevs_operational": 3, 00:16:02.900 "process": { 00:16:02.900 "type": "rebuild", 00:16:02.900 "target": "spare", 00:16:02.900 "progress": { 00:16:02.900 "blocks": 22528, 00:16:02.900 "percent": 17 00:16:02.900 } 00:16:02.900 }, 00:16:02.900 "base_bdevs_list": [ 00:16:02.900 { 00:16:02.900 "name": "spare", 00:16:02.900 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:02.900 "is_configured": true, 00:16:02.900 "data_offset": 2048, 00:16:02.900 "data_size": 63488 00:16:02.900 }, 00:16:02.901 { 00:16:02.901 "name": "BaseBdev2", 00:16:02.901 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:02.901 "is_configured": true, 00:16:02.901 "data_offset": 2048, 00:16:02.901 "data_size": 63488 00:16:02.901 }, 00:16:02.901 { 00:16:02.901 "name": "BaseBdev3", 00:16:02.901 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:02.901 "is_configured": true, 00:16:02.901 "data_offset": 2048, 00:16:02.901 "data_size": 63488 00:16:02.901 } 00:16:02.901 ] 00:16:02.901 }' 00:16:02.901 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.901 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.901 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:02.901 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.901 16:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.280 "name": "raid_bdev1", 00:16:04.280 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:04.280 "strip_size_kb": 64, 00:16:04.280 "state": "online", 00:16:04.280 "raid_level": "raid5f", 00:16:04.280 "superblock": true, 00:16:04.280 "num_base_bdevs": 3, 00:16:04.280 "num_base_bdevs_discovered": 3, 00:16:04.280 "num_base_bdevs_operational": 3, 00:16:04.280 "process": { 00:16:04.280 "type": "rebuild", 
00:16:04.280 "target": "spare", 00:16:04.280 "progress": { 00:16:04.280 "blocks": 47104, 00:16:04.280 "percent": 37 00:16:04.280 } 00:16:04.280 }, 00:16:04.280 "base_bdevs_list": [ 00:16:04.280 { 00:16:04.280 "name": "spare", 00:16:04.280 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:04.280 "is_configured": true, 00:16:04.280 "data_offset": 2048, 00:16:04.280 "data_size": 63488 00:16:04.280 }, 00:16:04.280 { 00:16:04.280 "name": "BaseBdev2", 00:16:04.280 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:04.280 "is_configured": true, 00:16:04.280 "data_offset": 2048, 00:16:04.280 "data_size": 63488 00:16:04.280 }, 00:16:04.280 { 00:16:04.280 "name": "BaseBdev3", 00:16:04.280 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:04.280 "is_configured": true, 00:16:04.280 "data_offset": 2048, 00:16:04.280 "data_size": 63488 00:16:04.280 } 00:16:04.280 ] 00:16:04.280 }' 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.280 16:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.217 "name": "raid_bdev1", 00:16:05.217 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:05.217 "strip_size_kb": 64, 00:16:05.217 "state": "online", 00:16:05.217 "raid_level": "raid5f", 00:16:05.217 "superblock": true, 00:16:05.217 "num_base_bdevs": 3, 00:16:05.217 "num_base_bdevs_discovered": 3, 00:16:05.217 "num_base_bdevs_operational": 3, 00:16:05.217 "process": { 00:16:05.217 "type": "rebuild", 00:16:05.217 "target": "spare", 00:16:05.217 "progress": { 00:16:05.217 "blocks": 69632, 00:16:05.217 "percent": 54 00:16:05.217 } 00:16:05.217 }, 00:16:05.217 "base_bdevs_list": [ 00:16:05.217 { 00:16:05.217 "name": "spare", 00:16:05.217 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:05.217 "is_configured": true, 00:16:05.217 "data_offset": 2048, 00:16:05.217 "data_size": 63488 00:16:05.217 }, 00:16:05.217 { 00:16:05.217 "name": "BaseBdev2", 00:16:05.217 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:05.217 "is_configured": true, 00:16:05.217 "data_offset": 2048, 00:16:05.217 "data_size": 63488 00:16:05.217 }, 00:16:05.217 { 00:16:05.217 "name": "BaseBdev3", 00:16:05.217 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:05.217 
"is_configured": true, 00:16:05.217 "data_offset": 2048, 00:16:05.217 "data_size": 63488 00:16:05.217 } 00:16:05.217 ] 00:16:05.217 }' 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.217 16:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.598 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.598 "name": "raid_bdev1", 00:16:06.598 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:06.598 "strip_size_kb": 64, 00:16:06.598 "state": "online", 00:16:06.598 "raid_level": "raid5f", 00:16:06.598 "superblock": true, 00:16:06.598 "num_base_bdevs": 3, 00:16:06.598 "num_base_bdevs_discovered": 3, 00:16:06.598 "num_base_bdevs_operational": 3, 00:16:06.598 "process": { 00:16:06.598 "type": "rebuild", 00:16:06.598 "target": "spare", 00:16:06.598 "progress": { 00:16:06.598 "blocks": 92160, 00:16:06.599 "percent": 72 00:16:06.599 } 00:16:06.599 }, 00:16:06.599 "base_bdevs_list": [ 00:16:06.599 { 00:16:06.599 "name": "spare", 00:16:06.599 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:06.599 "is_configured": true, 00:16:06.599 "data_offset": 2048, 00:16:06.599 "data_size": 63488 00:16:06.599 }, 00:16:06.599 { 00:16:06.599 "name": "BaseBdev2", 00:16:06.599 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:06.599 "is_configured": true, 00:16:06.599 "data_offset": 2048, 00:16:06.599 "data_size": 63488 00:16:06.599 }, 00:16:06.599 { 00:16:06.599 "name": "BaseBdev3", 00:16:06.599 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:06.599 "is_configured": true, 00:16:06.599 "data_offset": 2048, 00:16:06.599 "data_size": 63488 00:16:06.599 } 00:16:06.599 ] 00:16:06.599 }' 00:16:06.599 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.599 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.599 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.599 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.599 16:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.539 16:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:07.539 16:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.539 16:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.539 16:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.539 16:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.539 16:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.539 "name": "raid_bdev1", 00:16:07.539 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:07.539 "strip_size_kb": 64, 00:16:07.539 "state": "online", 00:16:07.539 "raid_level": "raid5f", 00:16:07.539 "superblock": true, 00:16:07.539 "num_base_bdevs": 3, 00:16:07.539 "num_base_bdevs_discovered": 3, 00:16:07.539 "num_base_bdevs_operational": 3, 00:16:07.539 "process": { 00:16:07.539 "type": "rebuild", 00:16:07.539 "target": "spare", 00:16:07.539 "progress": { 00:16:07.539 "blocks": 116736, 00:16:07.539 "percent": 91 00:16:07.539 } 00:16:07.539 }, 00:16:07.539 "base_bdevs_list": [ 00:16:07.539 { 00:16:07.539 "name": "spare", 00:16:07.539 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:07.539 "is_configured": true, 
00:16:07.539 "data_offset": 2048, 00:16:07.539 "data_size": 63488 00:16:07.539 }, 00:16:07.539 { 00:16:07.539 "name": "BaseBdev2", 00:16:07.539 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:07.539 "is_configured": true, 00:16:07.539 "data_offset": 2048, 00:16:07.539 "data_size": 63488 00:16:07.539 }, 00:16:07.539 { 00:16:07.539 "name": "BaseBdev3", 00:16:07.539 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:07.539 "is_configured": true, 00:16:07.539 "data_offset": 2048, 00:16:07.539 "data_size": 63488 00:16:07.539 } 00:16:07.539 ] 00:16:07.539 }' 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.539 16:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.109 [2024-09-28 16:17:22.496814] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:08.109 [2024-09-28 16:17:22.496885] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:08.109 [2024-09-28 16:17:22.496980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.680 
16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.680 "name": "raid_bdev1", 00:16:08.680 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:08.680 "strip_size_kb": 64, 00:16:08.680 "state": "online", 00:16:08.680 "raid_level": "raid5f", 00:16:08.680 "superblock": true, 00:16:08.680 "num_base_bdevs": 3, 00:16:08.680 "num_base_bdevs_discovered": 3, 00:16:08.680 "num_base_bdevs_operational": 3, 00:16:08.680 "base_bdevs_list": [ 00:16:08.680 { 00:16:08.680 "name": "spare", 00:16:08.680 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 }, 00:16:08.680 { 00:16:08.680 "name": "BaseBdev2", 00:16:08.680 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 }, 00:16:08.680 { 00:16:08.680 "name": "BaseBdev3", 00:16:08.680 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 } 00:16:08.680 ] 00:16:08.680 }' 00:16:08.680 16:17:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.680 "name": "raid_bdev1", 00:16:08.680 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:08.680 "strip_size_kb": 64, 00:16:08.680 "state": "online", 00:16:08.680 "raid_level": "raid5f", 00:16:08.680 "superblock": true, 
00:16:08.680 "num_base_bdevs": 3, 00:16:08.680 "num_base_bdevs_discovered": 3, 00:16:08.680 "num_base_bdevs_operational": 3, 00:16:08.680 "base_bdevs_list": [ 00:16:08.680 { 00:16:08.680 "name": "spare", 00:16:08.680 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 }, 00:16:08.680 { 00:16:08.680 "name": "BaseBdev2", 00:16:08.680 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 }, 00:16:08.680 { 00:16:08.680 "name": "BaseBdev3", 00:16:08.680 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 } 00:16:08.680 ] 00:16:08.680 }' 00:16:08.680 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.940 "name": "raid_bdev1", 00:16:08.940 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:08.940 "strip_size_kb": 64, 00:16:08.940 "state": "online", 00:16:08.940 "raid_level": "raid5f", 00:16:08.940 "superblock": true, 00:16:08.940 "num_base_bdevs": 3, 00:16:08.940 "num_base_bdevs_discovered": 3, 00:16:08.940 "num_base_bdevs_operational": 3, 00:16:08.940 "base_bdevs_list": [ 00:16:08.940 { 00:16:08.940 "name": "spare", 00:16:08.940 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:08.940 "is_configured": true, 00:16:08.940 "data_offset": 2048, 00:16:08.940 "data_size": 63488 00:16:08.940 }, 00:16:08.940 { 00:16:08.940 "name": "BaseBdev2", 00:16:08.940 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:08.940 "is_configured": true, 00:16:08.940 "data_offset": 2048, 00:16:08.940 "data_size": 63488 00:16:08.940 }, 00:16:08.940 { 00:16:08.940 "name": 
"BaseBdev3", 00:16:08.940 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:08.940 "is_configured": true, 00:16:08.940 "data_offset": 2048, 00:16:08.940 "data_size": 63488 00:16:08.940 } 00:16:08.940 ] 00:16:08.940 }' 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.940 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.510 [2024-09-28 16:17:23.892878] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.510 [2024-09-28 16:17:23.892908] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.510 [2024-09-28 16:17:23.892978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.510 [2024-09-28 16:17:23.893044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.510 [2024-09-28 16:17:23.893059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.510 16:17:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.510 16:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:09.510 /dev/nbd0 00:16:09.510 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.510 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.510 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:09.510 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@869 -- # local i 00:16:09.510 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:09.510 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.511 1+0 records in 00:16:09.511 1+0 records out 00:16:09.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228368 s, 17.9 MB/s 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.511 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare 
/dev/nbd1 00:16:09.771 /dev/nbd1 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.771 1+0 records in 00:16:09.771 1+0 records out 00:16:09.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049552 s, 8.3 MB/s 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # return 0 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.771 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:10.031 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:10.031 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.031 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.031 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.031 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:10.031 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.031 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.291 16:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.552 [2024-09-28 16:17:25.043858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.552 [2024-09-28 16:17:25.043936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.552 [2024-09-28 16:17:25.043955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:10.552 [2024-09-28 16:17:25.043967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.552 [2024-09-28 16:17:25.046046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.552 [2024-09-28 16:17:25.046089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.552 [2024-09-28 16:17:25.046167] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.552 [2024-09-28 16:17:25.046242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.552 [2024-09-28 16:17:25.046393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.552 [2024-09-28 16:17:25.046508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.552 spare 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.552 [2024-09-28 16:17:25.146396] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:10.552 [2024-09-28 16:17:25.146425] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:10.552 [2024-09-28 
16:17:25.146672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:10.552 [2024-09-28 16:17:25.151632] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:10.552 [2024-09-28 16:17:25.151656] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:10.552 [2024-09-28 16:17:25.151821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.552 16:17:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.552 "name": "raid_bdev1", 00:16:10.552 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:10.552 "strip_size_kb": 64, 00:16:10.552 "state": "online", 00:16:10.552 "raid_level": "raid5f", 00:16:10.552 "superblock": true, 00:16:10.552 "num_base_bdevs": 3, 00:16:10.552 "num_base_bdevs_discovered": 3, 00:16:10.552 "num_base_bdevs_operational": 3, 00:16:10.552 "base_bdevs_list": [ 00:16:10.552 { 00:16:10.552 "name": "spare", 00:16:10.552 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:10.552 "is_configured": true, 00:16:10.552 "data_offset": 2048, 00:16:10.552 "data_size": 63488 00:16:10.552 }, 00:16:10.552 { 00:16:10.552 "name": "BaseBdev2", 00:16:10.552 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:10.552 "is_configured": true, 00:16:10.552 "data_offset": 2048, 00:16:10.552 "data_size": 63488 00:16:10.552 }, 00:16:10.552 { 00:16:10.552 "name": "BaseBdev3", 00:16:10.552 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:10.552 "is_configured": true, 00:16:10.552 "data_offset": 2048, 00:16:10.552 "data_size": 63488 00:16:10.552 } 00:16:10.552 ] 00:16:10.552 }' 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.552 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.157 16:17:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.157 "name": "raid_bdev1", 00:16:11.157 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:11.157 "strip_size_kb": 64, 00:16:11.157 "state": "online", 00:16:11.157 "raid_level": "raid5f", 00:16:11.157 "superblock": true, 00:16:11.157 "num_base_bdevs": 3, 00:16:11.157 "num_base_bdevs_discovered": 3, 00:16:11.157 "num_base_bdevs_operational": 3, 00:16:11.157 "base_bdevs_list": [ 00:16:11.157 { 00:16:11.157 "name": "spare", 00:16:11.157 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:11.157 "is_configured": true, 00:16:11.157 "data_offset": 2048, 00:16:11.157 "data_size": 63488 00:16:11.157 }, 00:16:11.157 { 00:16:11.157 "name": "BaseBdev2", 00:16:11.157 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:11.157 "is_configured": true, 00:16:11.157 "data_offset": 2048, 00:16:11.157 "data_size": 63488 00:16:11.157 }, 00:16:11.157 { 00:16:11.157 "name": "BaseBdev3", 00:16:11.157 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:11.157 "is_configured": true, 00:16:11.157 "data_offset": 2048, 00:16:11.157 
"data_size": 63488 00:16:11.157 } 00:16:11.157 ] 00:16:11.157 }' 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.157 [2024-09-28 16:17:25.756593] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.157 16:17:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.157 "name": "raid_bdev1", 00:16:11.157 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:11.157 "strip_size_kb": 64, 00:16:11.157 "state": "online", 00:16:11.157 "raid_level": "raid5f", 00:16:11.157 "superblock": true, 00:16:11.157 "num_base_bdevs": 3, 00:16:11.157 "num_base_bdevs_discovered": 2, 00:16:11.157 "num_base_bdevs_operational": 2, 00:16:11.157 "base_bdevs_list": [ 00:16:11.157 { 00:16:11.157 "name": null, 00:16:11.157 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:11.157 "is_configured": false, 00:16:11.157 "data_offset": 0, 00:16:11.157 "data_size": 63488 00:16:11.157 }, 00:16:11.157 { 00:16:11.157 "name": "BaseBdev2", 00:16:11.157 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:11.157 "is_configured": true, 00:16:11.157 "data_offset": 2048, 00:16:11.157 "data_size": 63488 00:16:11.157 }, 00:16:11.157 { 00:16:11.157 "name": "BaseBdev3", 00:16:11.157 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:11.157 "is_configured": true, 00:16:11.157 "data_offset": 2048, 00:16:11.157 "data_size": 63488 00:16:11.157 } 00:16:11.157 ] 00:16:11.157 }' 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.157 16:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.737 16:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.737 16:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.737 16:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.737 [2024-09-28 16:17:26.147983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.737 [2024-09-28 16:17:26.148129] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.737 [2024-09-28 16:17:26.148146] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:11.737 [2024-09-28 16:17:26.148181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.737 [2024-09-28 16:17:26.161950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:11.737 16:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.737 16:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:11.737 [2024-09-28 16:17:26.168763] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.677 "name": "raid_bdev1", 00:16:12.677 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:12.677 "strip_size_kb": 64, 00:16:12.677 "state": "online", 00:16:12.677 
"raid_level": "raid5f", 00:16:12.677 "superblock": true, 00:16:12.677 "num_base_bdevs": 3, 00:16:12.677 "num_base_bdevs_discovered": 3, 00:16:12.677 "num_base_bdevs_operational": 3, 00:16:12.677 "process": { 00:16:12.677 "type": "rebuild", 00:16:12.677 "target": "spare", 00:16:12.677 "progress": { 00:16:12.677 "blocks": 20480, 00:16:12.677 "percent": 16 00:16:12.677 } 00:16:12.677 }, 00:16:12.677 "base_bdevs_list": [ 00:16:12.677 { 00:16:12.677 "name": "spare", 00:16:12.677 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:12.677 "is_configured": true, 00:16:12.677 "data_offset": 2048, 00:16:12.677 "data_size": 63488 00:16:12.677 }, 00:16:12.677 { 00:16:12.677 "name": "BaseBdev2", 00:16:12.677 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:12.677 "is_configured": true, 00:16:12.677 "data_offset": 2048, 00:16:12.677 "data_size": 63488 00:16:12.677 }, 00:16:12.677 { 00:16:12.677 "name": "BaseBdev3", 00:16:12.677 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:12.677 "is_configured": true, 00:16:12.677 "data_offset": 2048, 00:16:12.677 "data_size": 63488 00:16:12.677 } 00:16:12.677 ] 00:16:12.677 }' 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.677 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.677 [2024-09-28 16:17:27.307688] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.937 [2024-09-28 16:17:27.375709] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.937 [2024-09-28 16:17:27.375784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.937 [2024-09-28 16:17:27.375800] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.937 [2024-09-28 16:17:27.375809] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.937 "name": "raid_bdev1", 00:16:12.937 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:12.937 "strip_size_kb": 64, 00:16:12.937 "state": "online", 00:16:12.937 "raid_level": "raid5f", 00:16:12.937 "superblock": true, 00:16:12.937 "num_base_bdevs": 3, 00:16:12.937 "num_base_bdevs_discovered": 2, 00:16:12.937 "num_base_bdevs_operational": 2, 00:16:12.937 "base_bdevs_list": [ 00:16:12.937 { 00:16:12.937 "name": null, 00:16:12.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.937 "is_configured": false, 00:16:12.937 "data_offset": 0, 00:16:12.937 "data_size": 63488 00:16:12.937 }, 00:16:12.937 { 00:16:12.937 "name": "BaseBdev2", 00:16:12.937 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:12.937 "is_configured": true, 00:16:12.937 "data_offset": 2048, 00:16:12.937 "data_size": 63488 00:16:12.937 }, 00:16:12.937 { 00:16:12.937 "name": "BaseBdev3", 00:16:12.937 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:12.937 "is_configured": true, 00:16:12.937 "data_offset": 2048, 00:16:12.937 "data_size": 63488 00:16:12.937 } 00:16:12.937 ] 00:16:12.937 }' 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.937 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.506 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:13.506 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.506 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.506 [2024-09-28 16:17:27.905602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:13.506 [2024-09-28 16:17:27.905660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.507 [2024-09-28 16:17:27.905679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:13.507 [2024-09-28 16:17:27.905692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.507 [2024-09-28 16:17:27.906108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.507 [2024-09-28 16:17:27.906135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:13.507 [2024-09-28 16:17:27.906210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:13.507 [2024-09-28 16:17:27.906244] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.507 [2024-09-28 16:17:27.906254] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:13.507 [2024-09-28 16:17:27.906275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.507 [2024-09-28 16:17:27.919487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:13.507 spare 00:16:13.507 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.507 16:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:13.507 [2024-09-28 16:17:27.925919] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.446 "name": "raid_bdev1", 00:16:14.446 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:14.446 "strip_size_kb": 64, 00:16:14.446 "state": 
"online", 00:16:14.446 "raid_level": "raid5f", 00:16:14.446 "superblock": true, 00:16:14.446 "num_base_bdevs": 3, 00:16:14.446 "num_base_bdevs_discovered": 3, 00:16:14.446 "num_base_bdevs_operational": 3, 00:16:14.446 "process": { 00:16:14.446 "type": "rebuild", 00:16:14.446 "target": "spare", 00:16:14.446 "progress": { 00:16:14.446 "blocks": 20480, 00:16:14.446 "percent": 16 00:16:14.446 } 00:16:14.446 }, 00:16:14.446 "base_bdevs_list": [ 00:16:14.446 { 00:16:14.446 "name": "spare", 00:16:14.446 "uuid": "8a40c3d5-e521-57ea-bd9e-7e72bedb1986", 00:16:14.446 "is_configured": true, 00:16:14.446 "data_offset": 2048, 00:16:14.446 "data_size": 63488 00:16:14.446 }, 00:16:14.446 { 00:16:14.446 "name": "BaseBdev2", 00:16:14.446 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:14.446 "is_configured": true, 00:16:14.446 "data_offset": 2048, 00:16:14.446 "data_size": 63488 00:16:14.446 }, 00:16:14.446 { 00:16:14.446 "name": "BaseBdev3", 00:16:14.446 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:14.446 "is_configured": true, 00:16:14.446 "data_offset": 2048, 00:16:14.446 "data_size": 63488 00:16:14.446 } 00:16:14.446 ] 00:16:14.446 }' 00:16:14.446 16:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.446 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.446 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.446 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.446 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.446 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.446 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.446 [2024-09-28 16:17:29.056801] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.706 [2024-09-28 16:17:29.132854] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.706 [2024-09-28 16:17:29.132907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.706 [2024-09-28 16:17:29.132942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.706 [2024-09-28 16:17:29.132949] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.706 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.706 "name": "raid_bdev1", 00:16:14.706 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:14.706 "strip_size_kb": 64, 00:16:14.707 "state": "online", 00:16:14.707 "raid_level": "raid5f", 00:16:14.707 "superblock": true, 00:16:14.707 "num_base_bdevs": 3, 00:16:14.707 "num_base_bdevs_discovered": 2, 00:16:14.707 "num_base_bdevs_operational": 2, 00:16:14.707 "base_bdevs_list": [ 00:16:14.707 { 00:16:14.707 "name": null, 00:16:14.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.707 "is_configured": false, 00:16:14.707 "data_offset": 0, 00:16:14.707 "data_size": 63488 00:16:14.707 }, 00:16:14.707 { 00:16:14.707 "name": "BaseBdev2", 00:16:14.707 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:14.707 "is_configured": true, 00:16:14.707 "data_offset": 2048, 00:16:14.707 "data_size": 63488 00:16:14.707 }, 00:16:14.707 { 00:16:14.707 "name": "BaseBdev3", 00:16:14.707 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:14.707 "is_configured": true, 00:16:14.707 "data_offset": 2048, 00:16:14.707 "data_size": 63488 00:16:14.707 } 00:16:14.707 ] 00:16:14.707 }' 00:16:14.707 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.707 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.966 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.967 "name": "raid_bdev1", 00:16:14.967 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:14.967 "strip_size_kb": 64, 00:16:14.967 "state": "online", 00:16:14.967 "raid_level": "raid5f", 00:16:14.967 "superblock": true, 00:16:14.967 "num_base_bdevs": 3, 00:16:14.967 "num_base_bdevs_discovered": 2, 00:16:14.967 "num_base_bdevs_operational": 2, 00:16:14.967 "base_bdevs_list": [ 00:16:14.967 { 00:16:14.967 "name": null, 00:16:14.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.967 "is_configured": false, 00:16:14.967 "data_offset": 0, 00:16:14.967 "data_size": 63488 00:16:14.967 }, 00:16:14.967 { 00:16:14.967 "name": "BaseBdev2", 00:16:14.967 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:14.967 "is_configured": true, 00:16:14.967 "data_offset": 2048, 00:16:14.967 "data_size": 63488 00:16:14.967 }, 00:16:14.967 { 00:16:14.967 "name": "BaseBdev3", 00:16:14.967 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:14.967 "is_configured": true, 
00:16:14.967 "data_offset": 2048, 00:16:14.967 "data_size": 63488 00:16:14.967 } 00:16:14.967 ] 00:16:14.967 }' 00:16:14.967 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.227 [2024-09-28 16:17:29.731216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.227 [2024-09-28 16:17:29.731276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.227 [2024-09-28 16:17:29.731314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:15.227 [2024-09-28 16:17:29.731323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.227 [2024-09-28 16:17:29.731720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.227 [2024-09-28 
16:17:29.731745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.227 [2024-09-28 16:17:29.731816] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:15.227 [2024-09-28 16:17:29.731829] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.227 [2024-09-28 16:17:29.731841] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:15.227 [2024-09-28 16:17:29.731853] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:15.227 BaseBdev1 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.227 16:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:16.163 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.164 16:17:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.164 "name": "raid_bdev1", 00:16:16.164 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:16.164 "strip_size_kb": 64, 00:16:16.164 "state": "online", 00:16:16.164 "raid_level": "raid5f", 00:16:16.164 "superblock": true, 00:16:16.164 "num_base_bdevs": 3, 00:16:16.164 "num_base_bdevs_discovered": 2, 00:16:16.164 "num_base_bdevs_operational": 2, 00:16:16.164 "base_bdevs_list": [ 00:16:16.164 { 00:16:16.164 "name": null, 00:16:16.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.164 "is_configured": false, 00:16:16.164 "data_offset": 0, 00:16:16.164 "data_size": 63488 00:16:16.164 }, 00:16:16.164 { 00:16:16.164 "name": "BaseBdev2", 00:16:16.164 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:16.164 "is_configured": true, 00:16:16.164 "data_offset": 2048, 00:16:16.164 "data_size": 63488 00:16:16.164 }, 00:16:16.164 { 00:16:16.164 "name": "BaseBdev3", 00:16:16.164 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:16.164 "is_configured": true, 00:16:16.164 "data_offset": 2048, 00:16:16.164 "data_size": 63488 00:16:16.164 } 00:16:16.164 ] 00:16:16.164 }' 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.164 16:17:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.731 "name": "raid_bdev1", 00:16:16.731 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:16.731 "strip_size_kb": 64, 00:16:16.731 "state": "online", 00:16:16.731 "raid_level": "raid5f", 00:16:16.731 "superblock": true, 00:16:16.731 "num_base_bdevs": 3, 00:16:16.731 "num_base_bdevs_discovered": 2, 00:16:16.731 "num_base_bdevs_operational": 2, 00:16:16.731 "base_bdevs_list": [ 00:16:16.731 { 00:16:16.731 "name": null, 00:16:16.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.731 "is_configured": false, 00:16:16.731 "data_offset": 0, 00:16:16.731 "data_size": 63488 00:16:16.731 }, 00:16:16.731 { 00:16:16.731 "name": "BaseBdev2", 00:16:16.731 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 
00:16:16.731 "is_configured": true, 00:16:16.731 "data_offset": 2048, 00:16:16.731 "data_size": 63488 00:16:16.731 }, 00:16:16.731 { 00:16:16.731 "name": "BaseBdev3", 00:16:16.731 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:16.731 "is_configured": true, 00:16:16.731 "data_offset": 2048, 00:16:16.731 "data_size": 63488 00:16:16.731 } 00:16:16.731 ] 00:16:16.731 }' 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.731 16:17:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.731 [2024-09-28 16:17:31.276989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.731 [2024-09-28 16:17:31.277116] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:16.731 [2024-09-28 16:17:31.277130] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:16.731 request: 00:16:16.731 { 00:16:16.731 "base_bdev": "BaseBdev1", 00:16:16.731 "raid_bdev": "raid_bdev1", 00:16:16.731 "method": "bdev_raid_add_base_bdev", 00:16:16.731 "req_id": 1 00:16:16.731 } 00:16:16.731 Got JSON-RPC error response 00:16:16.731 response: 00:16:16.731 { 00:16:16.731 "code": -22, 00:16:16.731 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:16.731 } 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:16.731 16:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.668 "name": "raid_bdev1", 00:16:17.668 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:17.668 "strip_size_kb": 64, 00:16:17.668 "state": "online", 00:16:17.668 "raid_level": "raid5f", 00:16:17.668 "superblock": true, 00:16:17.668 "num_base_bdevs": 3, 00:16:17.668 "num_base_bdevs_discovered": 2, 00:16:17.668 "num_base_bdevs_operational": 2, 00:16:17.668 "base_bdevs_list": [ 00:16:17.668 { 00:16:17.668 "name": null, 00:16:17.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.668 "is_configured": false, 00:16:17.668 "data_offset": 0, 00:16:17.668 "data_size": 63488 00:16:17.668 }, 00:16:17.668 { 00:16:17.668 
"name": "BaseBdev2", 00:16:17.668 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:17.668 "is_configured": true, 00:16:17.668 "data_offset": 2048, 00:16:17.668 "data_size": 63488 00:16:17.668 }, 00:16:17.668 { 00:16:17.668 "name": "BaseBdev3", 00:16:17.668 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:17.668 "is_configured": true, 00:16:17.668 "data_offset": 2048, 00:16:17.668 "data_size": 63488 00:16:17.668 } 00:16:17.668 ] 00:16:17.668 }' 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.668 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.237 "name": "raid_bdev1", 00:16:18.237 "uuid": "e0dd6c2f-90d6-45eb-a05b-ee4104d61365", 00:16:18.237 
"strip_size_kb": 64, 00:16:18.237 "state": "online", 00:16:18.237 "raid_level": "raid5f", 00:16:18.237 "superblock": true, 00:16:18.237 "num_base_bdevs": 3, 00:16:18.237 "num_base_bdevs_discovered": 2, 00:16:18.237 "num_base_bdevs_operational": 2, 00:16:18.237 "base_bdevs_list": [ 00:16:18.237 { 00:16:18.237 "name": null, 00:16:18.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.237 "is_configured": false, 00:16:18.237 "data_offset": 0, 00:16:18.237 "data_size": 63488 00:16:18.237 }, 00:16:18.237 { 00:16:18.237 "name": "BaseBdev2", 00:16:18.237 "uuid": "01844572-3178-59e7-9dfa-0651eb00de4b", 00:16:18.237 "is_configured": true, 00:16:18.237 "data_offset": 2048, 00:16:18.237 "data_size": 63488 00:16:18.237 }, 00:16:18.237 { 00:16:18.237 "name": "BaseBdev3", 00:16:18.237 "uuid": "e3136941-1caf-5872-8622-d95b5a0cc5c5", 00:16:18.237 "is_configured": true, 00:16:18.237 "data_offset": 2048, 00:16:18.237 "data_size": 63488 00:16:18.237 } 00:16:18.237 ] 00:16:18.237 }' 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82016 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82016 ']' 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82016 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.237 16:17:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82016 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82016' 00:16:18.237 killing process with pid 82016 00:16:18.237 Received shutdown signal, test time was about 60.000000 seconds 00:16:18.237 00:16:18.237 Latency(us) 00:16:18.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.237 =================================================================================================================== 00:16:18.237 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82016 00:16:18.237 [2024-09-28 16:17:32.841052] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.237 [2024-09-28 16:17:32.841158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.237 [2024-09-28 16:17:32.841211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.237 [2024-09-28 16:17:32.841222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:18.237 16:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82016 00:16:18.806 [2024-09-28 16:17:33.212216] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.743 16:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:19.743 00:16:19.743 real 0m23.250s 00:16:19.743 user 0m29.406s 00:16:19.743 sys 0m3.115s 00:16:19.743 16:17:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.743 16:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.743 ************************************ 00:16:19.743 END TEST raid5f_rebuild_test_sb 00:16:19.743 ************************************ 00:16:20.003 16:17:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:20.003 16:17:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:20.003 16:17:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:20.003 16:17:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.003 16:17:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.003 ************************************ 00:16:20.003 START TEST raid5f_state_function_test 00:16:20.003 ************************************ 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82775 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82775' 00:16:20.003 Process raid pid: 82775 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82775 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82775 ']' 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.003 16:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.003 [2024-09-28 16:17:34.575572] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:20.003 [2024-09-28 16:17:34.575801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.263 [2024-09-28 16:17:34.745954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.263 [2024-09-28 16:17:34.945273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.522 [2024-09-28 16:17:35.145287] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.523 [2024-09-28 16:17:35.145317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.782 [2024-09-28 16:17:35.384682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.782 [2024-09-28 16:17:35.384736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.782 [2024-09-28 16:17:35.384745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.782 [2024-09-28 16:17:35.384754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.782 [2024-09-28 16:17:35.384760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:20.782 [2024-09-28 16:17:35.384768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.782 [2024-09-28 16:17:35.384774] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:20.782 [2024-09-28 16:17:35.384784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.782 16:17:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.782 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.782 "name": "Existed_Raid", 00:16:20.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.782 "strip_size_kb": 64, 00:16:20.783 "state": "configuring", 00:16:20.783 "raid_level": "raid5f", 00:16:20.783 "superblock": false, 00:16:20.783 "num_base_bdevs": 4, 00:16:20.783 "num_base_bdevs_discovered": 0, 00:16:20.783 "num_base_bdevs_operational": 4, 00:16:20.783 "base_bdevs_list": [ 00:16:20.783 { 00:16:20.783 "name": "BaseBdev1", 00:16:20.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.783 "is_configured": false, 00:16:20.783 "data_offset": 0, 00:16:20.783 "data_size": 0 00:16:20.783 }, 00:16:20.783 { 00:16:20.783 "name": "BaseBdev2", 00:16:20.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.783 "is_configured": false, 00:16:20.783 "data_offset": 0, 00:16:20.783 "data_size": 0 00:16:20.783 }, 00:16:20.783 { 00:16:20.783 "name": "BaseBdev3", 00:16:20.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.783 "is_configured": false, 00:16:20.783 "data_offset": 0, 00:16:20.783 "data_size": 0 00:16:20.783 }, 00:16:20.783 { 00:16:20.783 "name": "BaseBdev4", 00:16:20.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.783 "is_configured": false, 00:16:20.783 "data_offset": 0, 00:16:20.783 "data_size": 0 00:16:20.783 } 00:16:20.783 ] 00:16:20.783 }' 00:16:20.783 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.783 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.351 [2024-09-28 16:17:35.855777] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.351 [2024-09-28 16:17:35.855876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.351 [2024-09-28 16:17:35.867776] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.351 [2024-09-28 16:17:35.867856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.351 [2024-09-28 16:17:35.867882] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.351 [2024-09-28 16:17:35.867903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.351 [2024-09-28 16:17:35.867919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:21.351 [2024-09-28 16:17:35.867938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.351 [2024-09-28 16:17:35.867954] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:21.351 [2024-09-28 16:17:35.867973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.351 [2024-09-28 16:17:35.950370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.351 BaseBdev1 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.351 
16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.351 [ 00:16:21.351 { 00:16:21.351 "name": "BaseBdev1", 00:16:21.351 "aliases": [ 00:16:21.351 "50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5" 00:16:21.351 ], 00:16:21.351 "product_name": "Malloc disk", 00:16:21.351 "block_size": 512, 00:16:21.351 "num_blocks": 65536, 00:16:21.351 "uuid": "50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5", 00:16:21.351 "assigned_rate_limits": { 00:16:21.351 "rw_ios_per_sec": 0, 00:16:21.351 "rw_mbytes_per_sec": 0, 00:16:21.351 "r_mbytes_per_sec": 0, 00:16:21.351 "w_mbytes_per_sec": 0 00:16:21.351 }, 00:16:21.351 "claimed": true, 00:16:21.351 "claim_type": "exclusive_write", 00:16:21.351 "zoned": false, 00:16:21.351 "supported_io_types": { 00:16:21.351 "read": true, 00:16:21.351 "write": true, 00:16:21.351 "unmap": true, 00:16:21.351 "flush": true, 00:16:21.351 "reset": true, 00:16:21.351 "nvme_admin": false, 00:16:21.351 "nvme_io": false, 00:16:21.351 "nvme_io_md": false, 00:16:21.351 "write_zeroes": true, 00:16:21.351 "zcopy": true, 00:16:21.351 "get_zone_info": false, 00:16:21.351 "zone_management": false, 00:16:21.351 "zone_append": false, 00:16:21.351 "compare": false, 00:16:21.351 "compare_and_write": false, 00:16:21.351 "abort": true, 00:16:21.351 "seek_hole": false, 00:16:21.351 "seek_data": false, 00:16:21.351 "copy": true, 00:16:21.351 "nvme_iov_md": false 00:16:21.351 }, 00:16:21.351 "memory_domains": [ 00:16:21.351 { 00:16:21.351 "dma_device_id": "system", 00:16:21.351 "dma_device_type": 1 00:16:21.351 }, 00:16:21.351 { 00:16:21.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.351 "dma_device_type": 2 00:16:21.351 } 00:16:21.351 ], 00:16:21.351 "driver_specific": {} 00:16:21.351 } 
00:16:21.351 ] 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.351 16:17:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.351 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:21.610 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.610 "name": "Existed_Raid", 00:16:21.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.610 "strip_size_kb": 64, 00:16:21.610 "state": "configuring", 00:16:21.610 "raid_level": "raid5f", 00:16:21.610 "superblock": false, 00:16:21.610 "num_base_bdevs": 4, 00:16:21.610 "num_base_bdevs_discovered": 1, 00:16:21.610 "num_base_bdevs_operational": 4, 00:16:21.610 "base_bdevs_list": [ 00:16:21.610 { 00:16:21.610 "name": "BaseBdev1", 00:16:21.610 "uuid": "50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5", 00:16:21.610 "is_configured": true, 00:16:21.610 "data_offset": 0, 00:16:21.610 "data_size": 65536 00:16:21.610 }, 00:16:21.610 { 00:16:21.610 "name": "BaseBdev2", 00:16:21.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.610 "is_configured": false, 00:16:21.610 "data_offset": 0, 00:16:21.610 "data_size": 0 00:16:21.610 }, 00:16:21.610 { 00:16:21.610 "name": "BaseBdev3", 00:16:21.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.610 "is_configured": false, 00:16:21.610 "data_offset": 0, 00:16:21.610 "data_size": 0 00:16:21.610 }, 00:16:21.610 { 00:16:21.610 "name": "BaseBdev4", 00:16:21.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.610 "is_configured": false, 00:16:21.610 "data_offset": 0, 00:16:21.610 "data_size": 0 00:16:21.610 } 00:16:21.610 ] 00:16:21.610 }' 00:16:21.610 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.610 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.870 
[2024-09-28 16:17:36.485459] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.870 [2024-09-28 16:17:36.485499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.870 [2024-09-28 16:17:36.497484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.870 [2024-09-28 16:17:36.499190] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.870 [2024-09-28 16:17:36.499242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.870 [2024-09-28 16:17:36.499252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:21.870 [2024-09-28 16:17:36.499262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.870 [2024-09-28 16:17:36.499268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:21.870 [2024-09-28 16:17:36.499276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.870 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.130 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.130 "name": "Existed_Raid", 00:16:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:22.130 "strip_size_kb": 64, 00:16:22.130 "state": "configuring", 00:16:22.130 "raid_level": "raid5f", 00:16:22.130 "superblock": false, 00:16:22.130 "num_base_bdevs": 4, 00:16:22.130 "num_base_bdevs_discovered": 1, 00:16:22.130 "num_base_bdevs_operational": 4, 00:16:22.130 "base_bdevs_list": [ 00:16:22.130 { 00:16:22.130 "name": "BaseBdev1", 00:16:22.130 "uuid": "50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5", 00:16:22.130 "is_configured": true, 00:16:22.130 "data_offset": 0, 00:16:22.130 "data_size": 65536 00:16:22.130 }, 00:16:22.130 { 00:16:22.130 "name": "BaseBdev2", 00:16:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.130 "is_configured": false, 00:16:22.130 "data_offset": 0, 00:16:22.130 "data_size": 0 00:16:22.130 }, 00:16:22.130 { 00:16:22.130 "name": "BaseBdev3", 00:16:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.130 "is_configured": false, 00:16:22.130 "data_offset": 0, 00:16:22.130 "data_size": 0 00:16:22.130 }, 00:16:22.130 { 00:16:22.130 "name": "BaseBdev4", 00:16:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.130 "is_configured": false, 00:16:22.130 "data_offset": 0, 00:16:22.130 "data_size": 0 00:16:22.130 } 00:16:22.130 ] 00:16:22.130 }' 00:16:22.130 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.130 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.390 16:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.390 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.390 16:17:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.391 [2024-09-28 16:17:37.007557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.391 BaseBdev2 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.391 [ 00:16:22.391 { 00:16:22.391 "name": "BaseBdev2", 00:16:22.391 "aliases": [ 00:16:22.391 "4d84b303-0900-49ff-b858-564a7a71d5b5" 00:16:22.391 ], 00:16:22.391 "product_name": "Malloc disk", 00:16:22.391 "block_size": 512, 00:16:22.391 "num_blocks": 65536, 00:16:22.391 "uuid": "4d84b303-0900-49ff-b858-564a7a71d5b5", 00:16:22.391 "assigned_rate_limits": { 00:16:22.391 "rw_ios_per_sec": 0, 00:16:22.391 "rw_mbytes_per_sec": 0, 00:16:22.391 
"r_mbytes_per_sec": 0, 00:16:22.391 "w_mbytes_per_sec": 0 00:16:22.391 }, 00:16:22.391 "claimed": true, 00:16:22.391 "claim_type": "exclusive_write", 00:16:22.391 "zoned": false, 00:16:22.391 "supported_io_types": { 00:16:22.391 "read": true, 00:16:22.391 "write": true, 00:16:22.391 "unmap": true, 00:16:22.391 "flush": true, 00:16:22.391 "reset": true, 00:16:22.391 "nvme_admin": false, 00:16:22.391 "nvme_io": false, 00:16:22.391 "nvme_io_md": false, 00:16:22.391 "write_zeroes": true, 00:16:22.391 "zcopy": true, 00:16:22.391 "get_zone_info": false, 00:16:22.391 "zone_management": false, 00:16:22.391 "zone_append": false, 00:16:22.391 "compare": false, 00:16:22.391 "compare_and_write": false, 00:16:22.391 "abort": true, 00:16:22.391 "seek_hole": false, 00:16:22.391 "seek_data": false, 00:16:22.391 "copy": true, 00:16:22.391 "nvme_iov_md": false 00:16:22.391 }, 00:16:22.391 "memory_domains": [ 00:16:22.391 { 00:16:22.391 "dma_device_id": "system", 00:16:22.391 "dma_device_type": 1 00:16:22.391 }, 00:16:22.391 { 00:16:22.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.391 "dma_device_type": 2 00:16:22.391 } 00:16:22.391 ], 00:16:22.391 "driver_specific": {} 00:16:22.391 } 00:16:22.391 ] 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.391 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.651 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.651 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.651 "name": "Existed_Raid", 00:16:22.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.651 "strip_size_kb": 64, 00:16:22.651 "state": "configuring", 00:16:22.651 "raid_level": "raid5f", 00:16:22.651 "superblock": false, 00:16:22.651 "num_base_bdevs": 4, 00:16:22.651 "num_base_bdevs_discovered": 2, 00:16:22.651 "num_base_bdevs_operational": 4, 00:16:22.651 "base_bdevs_list": [ 00:16:22.651 { 00:16:22.651 "name": "BaseBdev1", 00:16:22.651 "uuid": 
"50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5", 00:16:22.651 "is_configured": true, 00:16:22.651 "data_offset": 0, 00:16:22.651 "data_size": 65536 00:16:22.651 }, 00:16:22.651 { 00:16:22.651 "name": "BaseBdev2", 00:16:22.651 "uuid": "4d84b303-0900-49ff-b858-564a7a71d5b5", 00:16:22.651 "is_configured": true, 00:16:22.651 "data_offset": 0, 00:16:22.651 "data_size": 65536 00:16:22.651 }, 00:16:22.651 { 00:16:22.651 "name": "BaseBdev3", 00:16:22.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.651 "is_configured": false, 00:16:22.651 "data_offset": 0, 00:16:22.651 "data_size": 0 00:16:22.651 }, 00:16:22.651 { 00:16:22.651 "name": "BaseBdev4", 00:16:22.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.651 "is_configured": false, 00:16:22.651 "data_offset": 0, 00:16:22.651 "data_size": 0 00:16:22.651 } 00:16:22.651 ] 00:16:22.651 }' 00:16:22.651 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.651 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.911 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:22.911 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.911 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.912 [2024-09-28 16:17:37.559005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.912 BaseBdev3 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.912 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.912 [ 00:16:22.912 { 00:16:22.912 "name": "BaseBdev3", 00:16:22.912 "aliases": [ 00:16:22.912 "61d57d5d-782a-49d0-8771-68387193a0b2" 00:16:22.912 ], 00:16:22.912 "product_name": "Malloc disk", 00:16:22.912 "block_size": 512, 00:16:22.912 "num_blocks": 65536, 00:16:22.912 "uuid": "61d57d5d-782a-49d0-8771-68387193a0b2", 00:16:22.912 "assigned_rate_limits": { 00:16:22.912 "rw_ios_per_sec": 0, 00:16:22.912 "rw_mbytes_per_sec": 0, 00:16:22.912 "r_mbytes_per_sec": 0, 00:16:22.912 "w_mbytes_per_sec": 0 00:16:22.912 }, 00:16:22.912 "claimed": true, 00:16:22.912 "claim_type": "exclusive_write", 00:16:22.912 "zoned": false, 00:16:22.912 "supported_io_types": { 00:16:22.912 "read": true, 00:16:22.912 "write": true, 00:16:22.912 "unmap": true, 00:16:22.912 "flush": true, 00:16:22.912 "reset": true, 00:16:22.912 "nvme_admin": false, 
00:16:22.912 "nvme_io": false, 00:16:22.912 "nvme_io_md": false, 00:16:22.912 "write_zeroes": true, 00:16:22.912 "zcopy": true, 00:16:22.912 "get_zone_info": false, 00:16:22.912 "zone_management": false, 00:16:22.912 "zone_append": false, 00:16:22.912 "compare": false, 00:16:22.912 "compare_and_write": false, 00:16:22.912 "abort": true, 00:16:22.912 "seek_hole": false, 00:16:22.912 "seek_data": false, 00:16:22.912 "copy": true, 00:16:22.912 "nvme_iov_md": false 00:16:22.912 }, 00:16:22.912 "memory_domains": [ 00:16:22.912 { 00:16:22.912 "dma_device_id": "system", 00:16:22.912 "dma_device_type": 1 00:16:22.912 }, 00:16:22.912 { 00:16:22.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.172 "dma_device_type": 2 00:16:23.172 } 00:16:23.172 ], 00:16:23.172 "driver_specific": {} 00:16:23.172 } 00:16:23.172 ] 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.172 "name": "Existed_Raid", 00:16:23.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.172 "strip_size_kb": 64, 00:16:23.172 "state": "configuring", 00:16:23.172 "raid_level": "raid5f", 00:16:23.172 "superblock": false, 00:16:23.172 "num_base_bdevs": 4, 00:16:23.172 "num_base_bdevs_discovered": 3, 00:16:23.172 "num_base_bdevs_operational": 4, 00:16:23.172 "base_bdevs_list": [ 00:16:23.172 { 00:16:23.172 "name": "BaseBdev1", 00:16:23.172 "uuid": "50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5", 00:16:23.172 "is_configured": true, 00:16:23.172 "data_offset": 0, 00:16:23.172 "data_size": 65536 00:16:23.172 }, 00:16:23.172 { 00:16:23.172 "name": "BaseBdev2", 00:16:23.172 "uuid": "4d84b303-0900-49ff-b858-564a7a71d5b5", 00:16:23.172 "is_configured": true, 00:16:23.172 "data_offset": 0, 00:16:23.172 "data_size": 65536 00:16:23.172 }, 00:16:23.172 { 
00:16:23.172 "name": "BaseBdev3", 00:16:23.172 "uuid": "61d57d5d-782a-49d0-8771-68387193a0b2", 00:16:23.172 "is_configured": true, 00:16:23.172 "data_offset": 0, 00:16:23.172 "data_size": 65536 00:16:23.172 }, 00:16:23.172 { 00:16:23.172 "name": "BaseBdev4", 00:16:23.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.172 "is_configured": false, 00:16:23.172 "data_offset": 0, 00:16:23.172 "data_size": 0 00:16:23.172 } 00:16:23.172 ] 00:16:23.172 }' 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.172 16:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.433 [2024-09-28 16:17:38.074633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:23.433 [2024-09-28 16:17:38.074776] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:23.433 [2024-09-28 16:17:38.074806] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:23.433 [2024-09-28 16:17:38.075072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:23.433 [2024-09-28 16:17:38.082192] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:23.433 [2024-09-28 16:17:38.082260] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:23.433 [2024-09-28 16:17:38.082534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.433 BaseBdev4 00:16:23.433 16:17:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.433 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.433 [ 00:16:23.433 { 00:16:23.433 "name": "BaseBdev4", 00:16:23.433 "aliases": [ 00:16:23.433 "d38e9a58-fec1-418a-8ccd-0e9fabcb5b83" 00:16:23.433 ], 00:16:23.433 "product_name": "Malloc disk", 00:16:23.433 "block_size": 512, 00:16:23.433 "num_blocks": 65536, 00:16:23.433 "uuid": "d38e9a58-fec1-418a-8ccd-0e9fabcb5b83", 00:16:23.433 "assigned_rate_limits": { 00:16:23.433 "rw_ios_per_sec": 0, 00:16:23.433 
"rw_mbytes_per_sec": 0, 00:16:23.433 "r_mbytes_per_sec": 0, 00:16:23.433 "w_mbytes_per_sec": 0 00:16:23.433 }, 00:16:23.433 "claimed": true, 00:16:23.433 "claim_type": "exclusive_write", 00:16:23.433 "zoned": false, 00:16:23.433 "supported_io_types": { 00:16:23.433 "read": true, 00:16:23.433 "write": true, 00:16:23.433 "unmap": true, 00:16:23.433 "flush": true, 00:16:23.433 "reset": true, 00:16:23.433 "nvme_admin": false, 00:16:23.434 "nvme_io": false, 00:16:23.434 "nvme_io_md": false, 00:16:23.434 "write_zeroes": true, 00:16:23.434 "zcopy": true, 00:16:23.434 "get_zone_info": false, 00:16:23.434 "zone_management": false, 00:16:23.434 "zone_append": false, 00:16:23.434 "compare": false, 00:16:23.434 "compare_and_write": false, 00:16:23.434 "abort": true, 00:16:23.434 "seek_hole": false, 00:16:23.434 "seek_data": false, 00:16:23.434 "copy": true, 00:16:23.434 "nvme_iov_md": false 00:16:23.434 }, 00:16:23.434 "memory_domains": [ 00:16:23.434 { 00:16:23.434 "dma_device_id": "system", 00:16:23.694 "dma_device_type": 1 00:16:23.694 }, 00:16:23.694 { 00:16:23.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.694 "dma_device_type": 2 00:16:23.694 } 00:16:23.694 ], 00:16:23.694 "driver_specific": {} 00:16:23.694 } 00:16:23.694 ] 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.694 16:17:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.694 "name": "Existed_Raid", 00:16:23.694 "uuid": "bf9219d6-e49c-4390-9b84-df4364f57e00", 00:16:23.694 "strip_size_kb": 64, 00:16:23.694 "state": "online", 00:16:23.694 "raid_level": "raid5f", 00:16:23.694 "superblock": false, 00:16:23.694 "num_base_bdevs": 4, 00:16:23.694 "num_base_bdevs_discovered": 4, 00:16:23.694 "num_base_bdevs_operational": 4, 00:16:23.694 "base_bdevs_list": [ 00:16:23.694 { 00:16:23.694 "name": 
"BaseBdev1", 00:16:23.694 "uuid": "50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5", 00:16:23.694 "is_configured": true, 00:16:23.694 "data_offset": 0, 00:16:23.694 "data_size": 65536 00:16:23.694 }, 00:16:23.694 { 00:16:23.694 "name": "BaseBdev2", 00:16:23.694 "uuid": "4d84b303-0900-49ff-b858-564a7a71d5b5", 00:16:23.694 "is_configured": true, 00:16:23.694 "data_offset": 0, 00:16:23.694 "data_size": 65536 00:16:23.694 }, 00:16:23.694 { 00:16:23.694 "name": "BaseBdev3", 00:16:23.694 "uuid": "61d57d5d-782a-49d0-8771-68387193a0b2", 00:16:23.694 "is_configured": true, 00:16:23.694 "data_offset": 0, 00:16:23.694 "data_size": 65536 00:16:23.694 }, 00:16:23.694 { 00:16:23.694 "name": "BaseBdev4", 00:16:23.694 "uuid": "d38e9a58-fec1-418a-8ccd-0e9fabcb5b83", 00:16:23.694 "is_configured": true, 00:16:23.694 "data_offset": 0, 00:16:23.694 "data_size": 65536 00:16:23.694 } 00:16:23.694 ] 00:16:23.694 }' 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.694 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.954 [2024-09-28 16:17:38.553553] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.954 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:23.954 "name": "Existed_Raid", 00:16:23.954 "aliases": [ 00:16:23.954 "bf9219d6-e49c-4390-9b84-df4364f57e00" 00:16:23.954 ], 00:16:23.954 "product_name": "Raid Volume", 00:16:23.954 "block_size": 512, 00:16:23.954 "num_blocks": 196608, 00:16:23.954 "uuid": "bf9219d6-e49c-4390-9b84-df4364f57e00", 00:16:23.954 "assigned_rate_limits": { 00:16:23.954 "rw_ios_per_sec": 0, 00:16:23.954 "rw_mbytes_per_sec": 0, 00:16:23.954 "r_mbytes_per_sec": 0, 00:16:23.954 "w_mbytes_per_sec": 0 00:16:23.954 }, 00:16:23.954 "claimed": false, 00:16:23.954 "zoned": false, 00:16:23.954 "supported_io_types": { 00:16:23.954 "read": true, 00:16:23.954 "write": true, 00:16:23.954 "unmap": false, 00:16:23.954 "flush": false, 00:16:23.954 "reset": true, 00:16:23.954 "nvme_admin": false, 00:16:23.954 "nvme_io": false, 00:16:23.954 "nvme_io_md": false, 00:16:23.954 "write_zeroes": true, 00:16:23.954 "zcopy": false, 00:16:23.954 "get_zone_info": false, 00:16:23.954 "zone_management": false, 00:16:23.954 "zone_append": false, 00:16:23.954 "compare": false, 00:16:23.954 "compare_and_write": false, 00:16:23.954 "abort": false, 00:16:23.954 "seek_hole": false, 00:16:23.954 "seek_data": false, 00:16:23.954 "copy": false, 00:16:23.954 "nvme_iov_md": false 00:16:23.954 }, 00:16:23.954 "driver_specific": { 00:16:23.954 "raid": { 00:16:23.954 "uuid": "bf9219d6-e49c-4390-9b84-df4364f57e00", 00:16:23.954 "strip_size_kb": 64, 
00:16:23.954 "state": "online", 00:16:23.954 "raid_level": "raid5f", 00:16:23.954 "superblock": false, 00:16:23.954 "num_base_bdevs": 4, 00:16:23.954 "num_base_bdevs_discovered": 4, 00:16:23.954 "num_base_bdevs_operational": 4, 00:16:23.954 "base_bdevs_list": [ 00:16:23.954 { 00:16:23.954 "name": "BaseBdev1", 00:16:23.954 "uuid": "50d7a29c-7e82-46b1-9bd3-18d5b8fc84a5", 00:16:23.954 "is_configured": true, 00:16:23.954 "data_offset": 0, 00:16:23.954 "data_size": 65536 00:16:23.954 }, 00:16:23.954 { 00:16:23.954 "name": "BaseBdev2", 00:16:23.954 "uuid": "4d84b303-0900-49ff-b858-564a7a71d5b5", 00:16:23.954 "is_configured": true, 00:16:23.954 "data_offset": 0, 00:16:23.954 "data_size": 65536 00:16:23.954 }, 00:16:23.954 { 00:16:23.954 "name": "BaseBdev3", 00:16:23.954 "uuid": "61d57d5d-782a-49d0-8771-68387193a0b2", 00:16:23.954 "is_configured": true, 00:16:23.954 "data_offset": 0, 00:16:23.954 "data_size": 65536 00:16:23.954 }, 00:16:23.954 { 00:16:23.954 "name": "BaseBdev4", 00:16:23.954 "uuid": "d38e9a58-fec1-418a-8ccd-0e9fabcb5b83", 00:16:23.954 "is_configured": true, 00:16:23.954 "data_offset": 0, 00:16:23.955 "data_size": 65536 00:16:23.955 } 00:16:23.955 ] 00:16:23.955 } 00:16:23.955 } 00:16:23.955 }' 00:16:23.955 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:24.215 BaseBdev2 00:16:24.215 BaseBdev3 00:16:24.215 BaseBdev4' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.215 16:17:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.215 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:24.476 [2024-09-28 16:17:38.904841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.476 16:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.476 16:17:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.476 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.476 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.476 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.476 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.476 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.476 "name": "Existed_Raid", 00:16:24.476 "uuid": "bf9219d6-e49c-4390-9b84-df4364f57e00", 00:16:24.476 "strip_size_kb": 64, 00:16:24.476 "state": "online", 00:16:24.476 "raid_level": "raid5f", 00:16:24.476 "superblock": false, 00:16:24.476 "num_base_bdevs": 4, 00:16:24.476 "num_base_bdevs_discovered": 3, 00:16:24.476 "num_base_bdevs_operational": 3, 00:16:24.476 "base_bdevs_list": [ 00:16:24.476 { 00:16:24.476 "name": null, 00:16:24.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.476 "is_configured": false, 00:16:24.476 "data_offset": 0, 00:16:24.476 "data_size": 65536 00:16:24.476 }, 00:16:24.476 { 00:16:24.476 "name": "BaseBdev2", 00:16:24.476 "uuid": "4d84b303-0900-49ff-b858-564a7a71d5b5", 00:16:24.476 "is_configured": true, 00:16:24.476 "data_offset": 0, 00:16:24.476 "data_size": 65536 00:16:24.476 }, 00:16:24.476 { 00:16:24.476 "name": "BaseBdev3", 00:16:24.476 "uuid": "61d57d5d-782a-49d0-8771-68387193a0b2", 00:16:24.476 "is_configured": true, 00:16:24.476 "data_offset": 0, 00:16:24.476 "data_size": 65536 00:16:24.476 }, 00:16:24.476 { 00:16:24.476 "name": "BaseBdev4", 00:16:24.476 "uuid": "d38e9a58-fec1-418a-8ccd-0e9fabcb5b83", 00:16:24.476 "is_configured": true, 00:16:24.476 "data_offset": 0, 00:16:24.476 "data_size": 65536 00:16:24.476 } 00:16:24.476 ] 00:16:24.476 }' 00:16:24.476 
16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.476 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 [2024-09-28 16:17:39.500745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:25.046 [2024-09-28 16:17:39.500842] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.046 [2024-09-28 16:17:39.587470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.046 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.047 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.047 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.047 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:25.047 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.047 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:25.047 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.047 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.047 [2024-09-28 16:17:39.647390] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.307 [2024-09-28 16:17:39.794974] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:25.307 [2024-09-28 16:17:39.795029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.307 BaseBdev2 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.307 16:17:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.568 [ 00:16:25.568 { 00:16:25.568 "name": "BaseBdev2", 00:16:25.568 "aliases": [ 00:16:25.568 "ab360377-196a-4dbd-b20f-07220c43054e" 00:16:25.568 ], 00:16:25.568 "product_name": "Malloc disk", 00:16:25.568 "block_size": 512, 00:16:25.568 "num_blocks": 65536, 00:16:25.568 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:25.568 "assigned_rate_limits": { 00:16:25.568 "rw_ios_per_sec": 0, 00:16:25.568 "rw_mbytes_per_sec": 0, 00:16:25.568 "r_mbytes_per_sec": 0, 00:16:25.568 "w_mbytes_per_sec": 0 00:16:25.568 }, 00:16:25.568 "claimed": false, 00:16:25.568 "zoned": false, 00:16:25.568 "supported_io_types": { 00:16:25.568 "read": true, 00:16:25.568 "write": true, 00:16:25.568 "unmap": true, 00:16:25.568 "flush": true, 00:16:25.568 "reset": true, 00:16:25.568 "nvme_admin": false, 00:16:25.568 "nvme_io": false, 00:16:25.568 "nvme_io_md": false, 00:16:25.568 "write_zeroes": true, 00:16:25.568 "zcopy": true, 00:16:25.568 "get_zone_info": false, 00:16:25.568 "zone_management": false, 00:16:25.568 "zone_append": false, 00:16:25.568 "compare": false, 00:16:25.568 "compare_and_write": false, 00:16:25.568 "abort": true, 00:16:25.568 "seek_hole": false, 00:16:25.568 "seek_data": false, 00:16:25.568 "copy": true, 00:16:25.568 "nvme_iov_md": false 00:16:25.568 }, 00:16:25.568 "memory_domains": [ 00:16:25.568 { 00:16:25.568 "dma_device_id": "system", 00:16:25.568 
"dma_device_type": 1 00:16:25.568 }, 00:16:25.568 { 00:16:25.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.568 "dma_device_type": 2 00:16:25.568 } 00:16:25.568 ], 00:16:25.568 "driver_specific": {} 00:16:25.568 } 00:16:25.568 ] 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.568 BaseBdev3 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:25.568 16:17:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.568 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.568 [ 00:16:25.568 { 00:16:25.568 "name": "BaseBdev3", 00:16:25.568 "aliases": [ 00:16:25.568 "2536f563-0605-47a2-886c-04fdb3150b66" 00:16:25.568 ], 00:16:25.568 "product_name": "Malloc disk", 00:16:25.568 "block_size": 512, 00:16:25.568 "num_blocks": 65536, 00:16:25.568 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:25.568 "assigned_rate_limits": { 00:16:25.568 "rw_ios_per_sec": 0, 00:16:25.568 "rw_mbytes_per_sec": 0, 00:16:25.568 "r_mbytes_per_sec": 0, 00:16:25.568 "w_mbytes_per_sec": 0 00:16:25.568 }, 00:16:25.568 "claimed": false, 00:16:25.568 "zoned": false, 00:16:25.569 "supported_io_types": { 00:16:25.569 "read": true, 00:16:25.569 "write": true, 00:16:25.569 "unmap": true, 00:16:25.569 "flush": true, 00:16:25.569 "reset": true, 00:16:25.569 "nvme_admin": false, 00:16:25.569 "nvme_io": false, 00:16:25.569 "nvme_io_md": false, 00:16:25.569 "write_zeroes": true, 00:16:25.569 "zcopy": true, 00:16:25.569 "get_zone_info": false, 00:16:25.569 "zone_management": false, 00:16:25.569 "zone_append": false, 00:16:25.569 "compare": false, 00:16:25.569 "compare_and_write": false, 00:16:25.569 "abort": true, 00:16:25.569 "seek_hole": false, 00:16:25.569 "seek_data": false, 00:16:25.569 "copy": true, 00:16:25.569 "nvme_iov_md": false 00:16:25.569 }, 00:16:25.569 "memory_domains": [ 00:16:25.569 { 00:16:25.569 
"dma_device_id": "system", 00:16:25.569 "dma_device_type": 1 00:16:25.569 }, 00:16:25.569 { 00:16:25.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.569 "dma_device_type": 2 00:16:25.569 } 00:16:25.569 ], 00:16:25.569 "driver_specific": {} 00:16:25.569 } 00:16:25.569 ] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.569 BaseBdev4 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.569 [ 00:16:25.569 { 00:16:25.569 "name": "BaseBdev4", 00:16:25.569 "aliases": [ 00:16:25.569 "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4" 00:16:25.569 ], 00:16:25.569 "product_name": "Malloc disk", 00:16:25.569 "block_size": 512, 00:16:25.569 "num_blocks": 65536, 00:16:25.569 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:25.569 "assigned_rate_limits": { 00:16:25.569 "rw_ios_per_sec": 0, 00:16:25.569 "rw_mbytes_per_sec": 0, 00:16:25.569 "r_mbytes_per_sec": 0, 00:16:25.569 "w_mbytes_per_sec": 0 00:16:25.569 }, 00:16:25.569 "claimed": false, 00:16:25.569 "zoned": false, 00:16:25.569 "supported_io_types": { 00:16:25.569 "read": true, 00:16:25.569 "write": true, 00:16:25.569 "unmap": true, 00:16:25.569 "flush": true, 00:16:25.569 "reset": true, 00:16:25.569 "nvme_admin": false, 00:16:25.569 "nvme_io": false, 00:16:25.569 "nvme_io_md": false, 00:16:25.569 "write_zeroes": true, 00:16:25.569 "zcopy": true, 00:16:25.569 "get_zone_info": false, 00:16:25.569 "zone_management": false, 00:16:25.569 "zone_append": false, 00:16:25.569 "compare": false, 00:16:25.569 "compare_and_write": false, 00:16:25.569 "abort": true, 00:16:25.569 "seek_hole": false, 00:16:25.569 "seek_data": false, 00:16:25.569 "copy": true, 00:16:25.569 "nvme_iov_md": false 00:16:25.569 }, 00:16:25.569 "memory_domains": [ 
00:16:25.569 { 00:16:25.569 "dma_device_id": "system", 00:16:25.569 "dma_device_type": 1 00:16:25.569 }, 00:16:25.569 { 00:16:25.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.569 "dma_device_type": 2 00:16:25.569 } 00:16:25.569 ], 00:16:25.569 "driver_specific": {} 00:16:25.569 } 00:16:25.569 ] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.569 [2024-09-28 16:17:40.170694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.569 [2024-09-28 16:17:40.170823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.569 [2024-09-28 16:17:40.170863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.569 [2024-09-28 16:17:40.172640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.569 [2024-09-28 16:17:40.172723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.569 "name": "Existed_Raid", 00:16:25.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.569 "strip_size_kb": 64, 00:16:25.569 "state": "configuring", 00:16:25.569 "raid_level": "raid5f", 00:16:25.569 
"superblock": false, 00:16:25.569 "num_base_bdevs": 4, 00:16:25.569 "num_base_bdevs_discovered": 3, 00:16:25.569 "num_base_bdevs_operational": 4, 00:16:25.569 "base_bdevs_list": [ 00:16:25.569 { 00:16:25.569 "name": "BaseBdev1", 00:16:25.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.569 "is_configured": false, 00:16:25.569 "data_offset": 0, 00:16:25.569 "data_size": 0 00:16:25.569 }, 00:16:25.569 { 00:16:25.569 "name": "BaseBdev2", 00:16:25.569 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:25.569 "is_configured": true, 00:16:25.569 "data_offset": 0, 00:16:25.569 "data_size": 65536 00:16:25.569 }, 00:16:25.569 { 00:16:25.569 "name": "BaseBdev3", 00:16:25.569 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:25.569 "is_configured": true, 00:16:25.569 "data_offset": 0, 00:16:25.569 "data_size": 65536 00:16:25.569 }, 00:16:25.569 { 00:16:25.569 "name": "BaseBdev4", 00:16:25.569 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:25.569 "is_configured": true, 00:16:25.569 "data_offset": 0, 00:16:25.569 "data_size": 65536 00:16:25.569 } 00:16:25.569 ] 00:16:25.569 }' 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.569 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.140 [2024-09-28 16:17:40.633909] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.140 "name": "Existed_Raid", 00:16:26.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.140 "strip_size_kb": 64, 00:16:26.140 "state": "configuring", 00:16:26.140 "raid_level": "raid5f", 00:16:26.140 "superblock": false, 
00:16:26.140 "num_base_bdevs": 4, 00:16:26.140 "num_base_bdevs_discovered": 2, 00:16:26.140 "num_base_bdevs_operational": 4, 00:16:26.140 "base_bdevs_list": [ 00:16:26.140 { 00:16:26.140 "name": "BaseBdev1", 00:16:26.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.140 "is_configured": false, 00:16:26.140 "data_offset": 0, 00:16:26.140 "data_size": 0 00:16:26.140 }, 00:16:26.140 { 00:16:26.140 "name": null, 00:16:26.140 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:26.140 "is_configured": false, 00:16:26.140 "data_offset": 0, 00:16:26.140 "data_size": 65536 00:16:26.140 }, 00:16:26.140 { 00:16:26.140 "name": "BaseBdev3", 00:16:26.140 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:26.140 "is_configured": true, 00:16:26.140 "data_offset": 0, 00:16:26.140 "data_size": 65536 00:16:26.140 }, 00:16:26.140 { 00:16:26.140 "name": "BaseBdev4", 00:16:26.140 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:26.140 "is_configured": true, 00:16:26.140 "data_offset": 0, 00:16:26.140 "data_size": 65536 00:16:26.140 } 00:16:26.140 ] 00:16:26.140 }' 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.140 16:17:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.400 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.400 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:26.660 
16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.660 [2024-09-28 16:17:41.119802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.660 BaseBdev1 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.660 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.660 
16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.660 [ 00:16:26.660 { 00:16:26.660 "name": "BaseBdev1", 00:16:26.660 "aliases": [ 00:16:26.660 "1e63cb41-99ff-400b-b5ba-c680d5561791" 00:16:26.660 ], 00:16:26.660 "product_name": "Malloc disk", 00:16:26.660 "block_size": 512, 00:16:26.660 "num_blocks": 65536, 00:16:26.660 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:26.660 "assigned_rate_limits": { 00:16:26.660 "rw_ios_per_sec": 0, 00:16:26.660 "rw_mbytes_per_sec": 0, 00:16:26.660 "r_mbytes_per_sec": 0, 00:16:26.660 "w_mbytes_per_sec": 0 00:16:26.660 }, 00:16:26.660 "claimed": true, 00:16:26.660 "claim_type": "exclusive_write", 00:16:26.660 "zoned": false, 00:16:26.660 "supported_io_types": { 00:16:26.660 "read": true, 00:16:26.660 "write": true, 00:16:26.660 "unmap": true, 00:16:26.660 "flush": true, 00:16:26.660 "reset": true, 00:16:26.660 "nvme_admin": false, 00:16:26.660 "nvme_io": false, 00:16:26.660 "nvme_io_md": false, 00:16:26.660 "write_zeroes": true, 00:16:26.660 "zcopy": true, 00:16:26.660 "get_zone_info": false, 00:16:26.660 "zone_management": false, 00:16:26.660 "zone_append": false, 00:16:26.660 "compare": false, 00:16:26.660 "compare_and_write": false, 00:16:26.660 "abort": true, 00:16:26.660 "seek_hole": false, 00:16:26.660 "seek_data": false, 00:16:26.660 "copy": true, 00:16:26.661 "nvme_iov_md": false 00:16:26.661 }, 00:16:26.661 "memory_domains": [ 00:16:26.661 { 00:16:26.661 "dma_device_id": "system", 00:16:26.661 "dma_device_type": 1 00:16:26.661 }, 00:16:26.661 { 00:16:26.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.661 "dma_device_type": 2 00:16:26.661 } 00:16:26.661 ], 00:16:26.661 "driver_specific": {} 00:16:26.661 } 00:16:26.661 ] 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:26.661 16:17:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.661 "name": "Existed_Raid", 00:16:26.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.661 "strip_size_kb": 64, 00:16:26.661 "state": 
"configuring", 00:16:26.661 "raid_level": "raid5f", 00:16:26.661 "superblock": false, 00:16:26.661 "num_base_bdevs": 4, 00:16:26.661 "num_base_bdevs_discovered": 3, 00:16:26.661 "num_base_bdevs_operational": 4, 00:16:26.661 "base_bdevs_list": [ 00:16:26.661 { 00:16:26.661 "name": "BaseBdev1", 00:16:26.661 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:26.661 "is_configured": true, 00:16:26.661 "data_offset": 0, 00:16:26.661 "data_size": 65536 00:16:26.661 }, 00:16:26.661 { 00:16:26.661 "name": null, 00:16:26.661 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:26.661 "is_configured": false, 00:16:26.661 "data_offset": 0, 00:16:26.661 "data_size": 65536 00:16:26.661 }, 00:16:26.661 { 00:16:26.661 "name": "BaseBdev3", 00:16:26.661 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:26.661 "is_configured": true, 00:16:26.661 "data_offset": 0, 00:16:26.661 "data_size": 65536 00:16:26.661 }, 00:16:26.661 { 00:16:26.661 "name": "BaseBdev4", 00:16:26.661 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:26.661 "is_configured": true, 00:16:26.661 "data_offset": 0, 00:16:26.661 "data_size": 65536 00:16:26.661 } 00:16:26.661 ] 00:16:26.661 }' 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.661 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.231 16:17:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.231 [2024-09-28 16:17:41.675088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.231 16:17:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.231 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.231 "name": "Existed_Raid", 00:16:27.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.231 "strip_size_kb": 64, 00:16:27.232 "state": "configuring", 00:16:27.232 "raid_level": "raid5f", 00:16:27.232 "superblock": false, 00:16:27.232 "num_base_bdevs": 4, 00:16:27.232 "num_base_bdevs_discovered": 2, 00:16:27.232 "num_base_bdevs_operational": 4, 00:16:27.232 "base_bdevs_list": [ 00:16:27.232 { 00:16:27.232 "name": "BaseBdev1", 00:16:27.232 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:27.232 "is_configured": true, 00:16:27.232 "data_offset": 0, 00:16:27.232 "data_size": 65536 00:16:27.232 }, 00:16:27.232 { 00:16:27.232 "name": null, 00:16:27.232 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:27.232 "is_configured": false, 00:16:27.232 "data_offset": 0, 00:16:27.232 "data_size": 65536 00:16:27.232 }, 00:16:27.232 { 00:16:27.232 "name": null, 00:16:27.232 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:27.232 "is_configured": false, 00:16:27.232 "data_offset": 0, 00:16:27.232 "data_size": 65536 00:16:27.232 }, 00:16:27.232 { 00:16:27.232 "name": "BaseBdev4", 00:16:27.232 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:27.232 "is_configured": true, 00:16:27.232 "data_offset": 0, 00:16:27.232 "data_size": 65536 00:16:27.232 } 00:16:27.232 ] 00:16:27.232 }' 00:16:27.232 16:17:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.232 16:17:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.492 [2024-09-28 16:17:42.162311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.492 
16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.492 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.752 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.752 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.752 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.752 "name": "Existed_Raid", 00:16:27.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.752 "strip_size_kb": 64, 00:16:27.752 "state": "configuring", 00:16:27.752 "raid_level": "raid5f", 00:16:27.752 "superblock": false, 00:16:27.752 "num_base_bdevs": 4, 00:16:27.752 "num_base_bdevs_discovered": 3, 00:16:27.752 "num_base_bdevs_operational": 4, 00:16:27.752 "base_bdevs_list": [ 00:16:27.752 { 00:16:27.752 "name": "BaseBdev1", 00:16:27.752 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:27.752 "is_configured": true, 00:16:27.752 "data_offset": 0, 00:16:27.752 "data_size": 65536 00:16:27.752 }, 00:16:27.752 { 00:16:27.752 "name": null, 00:16:27.752 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:27.752 "is_configured": 
false, 00:16:27.752 "data_offset": 0, 00:16:27.752 "data_size": 65536 00:16:27.752 }, 00:16:27.752 { 00:16:27.752 "name": "BaseBdev3", 00:16:27.752 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:27.752 "is_configured": true, 00:16:27.752 "data_offset": 0, 00:16:27.752 "data_size": 65536 00:16:27.752 }, 00:16:27.752 { 00:16:27.752 "name": "BaseBdev4", 00:16:27.752 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:27.752 "is_configured": true, 00:16:27.752 "data_offset": 0, 00:16:27.752 "data_size": 65536 00:16:27.752 } 00:16:27.752 ] 00:16:27.752 }' 00:16:27.752 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.752 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.012 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.012 [2024-09-28 16:17:42.637477] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.272 "name": "Existed_Raid", 00:16:28.272 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.272 "strip_size_kb": 64, 00:16:28.272 "state": "configuring", 00:16:28.272 "raid_level": "raid5f", 00:16:28.272 "superblock": false, 00:16:28.272 "num_base_bdevs": 4, 00:16:28.272 "num_base_bdevs_discovered": 2, 00:16:28.272 "num_base_bdevs_operational": 4, 00:16:28.272 "base_bdevs_list": [ 00:16:28.272 { 00:16:28.272 "name": null, 00:16:28.272 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:28.272 "is_configured": false, 00:16:28.272 "data_offset": 0, 00:16:28.272 "data_size": 65536 00:16:28.272 }, 00:16:28.272 { 00:16:28.272 "name": null, 00:16:28.272 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:28.272 "is_configured": false, 00:16:28.272 "data_offset": 0, 00:16:28.272 "data_size": 65536 00:16:28.272 }, 00:16:28.272 { 00:16:28.272 "name": "BaseBdev3", 00:16:28.272 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:28.272 "is_configured": true, 00:16:28.272 "data_offset": 0, 00:16:28.272 "data_size": 65536 00:16:28.272 }, 00:16:28.272 { 00:16:28.272 "name": "BaseBdev4", 00:16:28.272 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:28.272 "is_configured": true, 00:16:28.272 "data_offset": 0, 00:16:28.272 "data_size": 65536 00:16:28.272 } 00:16:28.272 ] 00:16:28.272 }' 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.272 16:17:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.532 [2024-09-28 16:17:43.183741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.532 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.792 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.792 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.792 "name": "Existed_Raid", 00:16:28.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.792 "strip_size_kb": 64, 00:16:28.792 "state": "configuring", 00:16:28.792 "raid_level": "raid5f", 00:16:28.792 "superblock": false, 00:16:28.792 "num_base_bdevs": 4, 00:16:28.792 "num_base_bdevs_discovered": 3, 00:16:28.792 "num_base_bdevs_operational": 4, 00:16:28.792 "base_bdevs_list": [ 00:16:28.792 { 00:16:28.792 "name": null, 00:16:28.792 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:28.792 "is_configured": false, 00:16:28.792 "data_offset": 0, 00:16:28.792 "data_size": 65536 00:16:28.792 }, 00:16:28.792 { 00:16:28.792 "name": "BaseBdev2", 00:16:28.792 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:28.792 "is_configured": true, 00:16:28.792 "data_offset": 0, 00:16:28.792 "data_size": 65536 00:16:28.792 }, 00:16:28.792 { 00:16:28.792 "name": "BaseBdev3", 00:16:28.792 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:28.792 "is_configured": true, 00:16:28.792 "data_offset": 0, 00:16:28.792 "data_size": 65536 00:16:28.792 }, 00:16:28.792 { 00:16:28.792 "name": "BaseBdev4", 00:16:28.792 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:28.792 "is_configured": true, 00:16:28.792 "data_offset": 0, 00:16:28.792 "data_size": 65536 00:16:28.792 } 00:16:28.792 ] 00:16:28.792 }' 00:16:28.792 16:17:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.792 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e63cb41-99ff-400b-b5ba-c680d5561791 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.051 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.310 [2024-09-28 16:17:43.750109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:29.310 [2024-09-28 
16:17:43.750238] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:29.310 [2024-09-28 16:17:43.750264] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:29.310 [2024-09-28 16:17:43.750533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:29.310 [2024-09-28 16:17:43.757429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:29.310 [2024-09-28 16:17:43.757480] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:29.310 [2024-09-28 16:17:43.757751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.310 NewBaseBdev 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.310 [ 00:16:29.310 { 00:16:29.310 "name": "NewBaseBdev", 00:16:29.310 "aliases": [ 00:16:29.310 "1e63cb41-99ff-400b-b5ba-c680d5561791" 00:16:29.310 ], 00:16:29.310 "product_name": "Malloc disk", 00:16:29.310 "block_size": 512, 00:16:29.310 "num_blocks": 65536, 00:16:29.310 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:29.310 "assigned_rate_limits": { 00:16:29.310 "rw_ios_per_sec": 0, 00:16:29.310 "rw_mbytes_per_sec": 0, 00:16:29.310 "r_mbytes_per_sec": 0, 00:16:29.310 "w_mbytes_per_sec": 0 00:16:29.310 }, 00:16:29.310 "claimed": true, 00:16:29.310 "claim_type": "exclusive_write", 00:16:29.310 "zoned": false, 00:16:29.310 "supported_io_types": { 00:16:29.310 "read": true, 00:16:29.310 "write": true, 00:16:29.310 "unmap": true, 00:16:29.310 "flush": true, 00:16:29.310 "reset": true, 00:16:29.310 "nvme_admin": false, 00:16:29.310 "nvme_io": false, 00:16:29.310 "nvme_io_md": false, 00:16:29.310 "write_zeroes": true, 00:16:29.310 "zcopy": true, 00:16:29.310 "get_zone_info": false, 00:16:29.310 "zone_management": false, 00:16:29.310 "zone_append": false, 00:16:29.310 "compare": false, 00:16:29.310 "compare_and_write": false, 00:16:29.310 "abort": true, 00:16:29.310 "seek_hole": false, 00:16:29.310 "seek_data": false, 00:16:29.310 "copy": true, 00:16:29.310 "nvme_iov_md": false 00:16:29.310 }, 00:16:29.310 "memory_domains": [ 00:16:29.310 { 00:16:29.310 "dma_device_id": "system", 00:16:29.310 "dma_device_type": 1 00:16:29.310 }, 00:16:29.310 { 00:16:29.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.310 "dma_device_type": 2 00:16:29.310 } 
00:16:29.310 ], 00:16:29.310 "driver_specific": {} 00:16:29.310 } 00:16:29.310 ] 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.310 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.311 "name": "Existed_Raid", 00:16:29.311 "uuid": "5611aa4c-cf5e-4144-9caa-fb48e5fc7dab", 00:16:29.311 "strip_size_kb": 64, 00:16:29.311 "state": "online", 00:16:29.311 "raid_level": "raid5f", 00:16:29.311 "superblock": false, 00:16:29.311 "num_base_bdevs": 4, 00:16:29.311 "num_base_bdevs_discovered": 4, 00:16:29.311 "num_base_bdevs_operational": 4, 00:16:29.311 "base_bdevs_list": [ 00:16:29.311 { 00:16:29.311 "name": "NewBaseBdev", 00:16:29.311 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:29.311 "is_configured": true, 00:16:29.311 "data_offset": 0, 00:16:29.311 "data_size": 65536 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "name": "BaseBdev2", 00:16:29.311 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:29.311 "is_configured": true, 00:16:29.311 "data_offset": 0, 00:16:29.311 "data_size": 65536 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "name": "BaseBdev3", 00:16:29.311 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:29.311 "is_configured": true, 00:16:29.311 "data_offset": 0, 00:16:29.311 "data_size": 65536 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "name": "BaseBdev4", 00:16:29.311 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:29.311 "is_configured": true, 00:16:29.311 "data_offset": 0, 00:16:29.311 "data_size": 65536 00:16:29.311 } 00:16:29.311 ] 00:16:29.311 }' 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.311 16:17:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.571 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.571 [2024-09-28 16:17:44.248849] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.832 "name": "Existed_Raid", 00:16:29.832 "aliases": [ 00:16:29.832 "5611aa4c-cf5e-4144-9caa-fb48e5fc7dab" 00:16:29.832 ], 00:16:29.832 "product_name": "Raid Volume", 00:16:29.832 "block_size": 512, 00:16:29.832 "num_blocks": 196608, 00:16:29.832 "uuid": "5611aa4c-cf5e-4144-9caa-fb48e5fc7dab", 00:16:29.832 "assigned_rate_limits": { 00:16:29.832 "rw_ios_per_sec": 0, 00:16:29.832 "rw_mbytes_per_sec": 0, 00:16:29.832 "r_mbytes_per_sec": 0, 00:16:29.832 "w_mbytes_per_sec": 0 00:16:29.832 }, 00:16:29.832 "claimed": false, 00:16:29.832 "zoned": false, 00:16:29.832 "supported_io_types": { 00:16:29.832 "read": true, 00:16:29.832 "write": true, 00:16:29.832 "unmap": false, 00:16:29.832 "flush": false, 00:16:29.832 "reset": true, 00:16:29.832 "nvme_admin": false, 00:16:29.832 "nvme_io": false, 00:16:29.832 "nvme_io_md": 
false, 00:16:29.832 "write_zeroes": true, 00:16:29.832 "zcopy": false, 00:16:29.832 "get_zone_info": false, 00:16:29.832 "zone_management": false, 00:16:29.832 "zone_append": false, 00:16:29.832 "compare": false, 00:16:29.832 "compare_and_write": false, 00:16:29.832 "abort": false, 00:16:29.832 "seek_hole": false, 00:16:29.832 "seek_data": false, 00:16:29.832 "copy": false, 00:16:29.832 "nvme_iov_md": false 00:16:29.832 }, 00:16:29.832 "driver_specific": { 00:16:29.832 "raid": { 00:16:29.832 "uuid": "5611aa4c-cf5e-4144-9caa-fb48e5fc7dab", 00:16:29.832 "strip_size_kb": 64, 00:16:29.832 "state": "online", 00:16:29.832 "raid_level": "raid5f", 00:16:29.832 "superblock": false, 00:16:29.832 "num_base_bdevs": 4, 00:16:29.832 "num_base_bdevs_discovered": 4, 00:16:29.832 "num_base_bdevs_operational": 4, 00:16:29.832 "base_bdevs_list": [ 00:16:29.832 { 00:16:29.832 "name": "NewBaseBdev", 00:16:29.832 "uuid": "1e63cb41-99ff-400b-b5ba-c680d5561791", 00:16:29.832 "is_configured": true, 00:16:29.832 "data_offset": 0, 00:16:29.832 "data_size": 65536 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "BaseBdev2", 00:16:29.832 "uuid": "ab360377-196a-4dbd-b20f-07220c43054e", 00:16:29.832 "is_configured": true, 00:16:29.832 "data_offset": 0, 00:16:29.832 "data_size": 65536 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "BaseBdev3", 00:16:29.832 "uuid": "2536f563-0605-47a2-886c-04fdb3150b66", 00:16:29.832 "is_configured": true, 00:16:29.832 "data_offset": 0, 00:16:29.832 "data_size": 65536 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "BaseBdev4", 00:16:29.832 "uuid": "2d484f9e-b1cf-4c08-86e1-fba1a4317ed4", 00:16:29.832 "is_configured": true, 00:16:29.832 "data_offset": 0, 00:16:29.832 "data_size": 65536 00:16:29.832 } 00:16:29.832 ] 00:16:29.832 } 00:16:29.832 } 00:16:29.832 }' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.832 16:17:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:29.832 BaseBdev2 00:16:29.832 BaseBdev3 00:16:29.832 BaseBdev4' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.832 16:17:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.092 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.092 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.092 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.092 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.092 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.092 [2024-09-28 16:17:44.548152] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.093 [2024-09-28 16:17:44.548178] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.093 [2024-09-28 16:17:44.548251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.093 [2024-09-28 16:17:44.548524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.093 [2024-09-28 16:17:44.548534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82775 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82775 ']' 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82775 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82775 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82775' 00:16:30.093 killing process with pid 82775 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 82775 00:16:30.093 [2024-09-28 16:17:44.597783] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.093 16:17:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 82775 00:16:30.353 [2024-09-28 16:17:44.960951] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:31.735 00:16:31.735 real 0m11.674s 00:16:31.735 user 0m18.483s 00:16:31.735 sys 0m2.254s 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.735 ************************************ 00:16:31.735 END TEST raid5f_state_function_test 00:16:31.735 ************************************ 00:16:31.735 16:17:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:31.735 16:17:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:31.735 16:17:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.735 16:17:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.735 ************************************ 00:16:31.735 START TEST 
raid5f_state_function_test_sb 00:16:31.735 ************************************ 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:31.735 
16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:31.735 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83442 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:31.736 Process raid pid: 83442 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83442' 00:16:31.736 16:17:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83442 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83442 ']' 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.736 16:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.736 [2024-09-28 16:17:46.325014] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:31.736 [2024-09-28 16:17:46.325126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.996 [2024-09-28 16:17:46.489490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.257 [2024-09-28 16:17:46.694408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.257 [2024-09-28 16:17:46.888417] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.257 [2024-09-28 16:17:46.888454] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.518 [2024-09-28 16:17:47.138916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.518 [2024-09-28 16:17:47.138974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.518 [2024-09-28 16:17:47.138984] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.518 [2024-09-28 16:17:47.138993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.518 [2024-09-28 16:17:47.138998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:32.518 [2024-09-28 16:17:47.139008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.518 [2024-09-28 16:17:47.139014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:32.518 [2024-09-28 16:17:47.139022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.518 "name": "Existed_Raid", 00:16:32.518 "uuid": "b7575971-fc37-4408-a8fd-6b121afaf73e", 00:16:32.518 "strip_size_kb": 64, 00:16:32.518 "state": "configuring", 00:16:32.518 "raid_level": "raid5f", 00:16:32.518 "superblock": true, 00:16:32.518 "num_base_bdevs": 4, 00:16:32.518 "num_base_bdevs_discovered": 0, 00:16:32.518 "num_base_bdevs_operational": 4, 00:16:32.518 "base_bdevs_list": [ 00:16:32.518 { 00:16:32.518 "name": "BaseBdev1", 00:16:32.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.518 "is_configured": false, 00:16:32.518 "data_offset": 0, 00:16:32.518 "data_size": 0 00:16:32.518 }, 00:16:32.518 { 00:16:32.518 "name": "BaseBdev2", 00:16:32.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.518 "is_configured": false, 00:16:32.518 "data_offset": 0, 00:16:32.518 "data_size": 0 00:16:32.518 }, 00:16:32.518 { 00:16:32.518 "name": "BaseBdev3", 00:16:32.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.518 "is_configured": false, 00:16:32.518 "data_offset": 0, 00:16:32.518 "data_size": 0 00:16:32.518 }, 00:16:32.518 { 00:16:32.518 "name": "BaseBdev4", 00:16:32.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.518 "is_configured": false, 00:16:32.518 "data_offset": 0, 00:16:32.518 "data_size": 0 00:16:32.518 } 00:16:32.518 ] 00:16:32.518 }' 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.518 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 [2024-09-28 16:17:47.634011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.089 [2024-09-28 16:17:47.634107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 [2024-09-28 16:17:47.642031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.089 [2024-09-28 16:17:47.642110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.089 [2024-09-28 16:17:47.642135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.089 [2024-09-28 16:17:47.642156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.089 [2024-09-28 16:17:47.642172] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.089 [2024-09-28 16:17:47.642191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.089 [2024-09-28 16:17:47.642207] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:33.089 [2024-09-28 16:17:47.642235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 [2024-09-28 16:17:47.719474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.089 BaseBdev1 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 [ 00:16:33.089 { 00:16:33.089 "name": "BaseBdev1", 00:16:33.089 "aliases": [ 00:16:33.089 "dd6232d7-2036-400d-b70e-f8f4976de943" 00:16:33.089 ], 00:16:33.089 "product_name": "Malloc disk", 00:16:33.089 "block_size": 512, 00:16:33.089 "num_blocks": 65536, 00:16:33.089 "uuid": "dd6232d7-2036-400d-b70e-f8f4976de943", 00:16:33.089 "assigned_rate_limits": { 00:16:33.089 "rw_ios_per_sec": 0, 00:16:33.089 "rw_mbytes_per_sec": 0, 00:16:33.089 "r_mbytes_per_sec": 0, 00:16:33.089 "w_mbytes_per_sec": 0 00:16:33.089 }, 00:16:33.089 "claimed": true, 00:16:33.089 "claim_type": "exclusive_write", 00:16:33.089 "zoned": false, 00:16:33.089 "supported_io_types": { 00:16:33.089 "read": true, 00:16:33.089 "write": true, 00:16:33.089 "unmap": true, 00:16:33.089 "flush": true, 00:16:33.089 "reset": true, 00:16:33.089 "nvme_admin": false, 00:16:33.089 "nvme_io": false, 00:16:33.089 "nvme_io_md": false, 00:16:33.089 "write_zeroes": true, 00:16:33.089 "zcopy": true, 00:16:33.089 "get_zone_info": false, 00:16:33.089 "zone_management": false, 00:16:33.089 "zone_append": false, 00:16:33.089 "compare": false, 00:16:33.089 "compare_and_write": false, 00:16:33.089 "abort": true, 00:16:33.089 "seek_hole": false, 00:16:33.089 "seek_data": false, 00:16:33.089 "copy": true, 00:16:33.089 "nvme_iov_md": false 00:16:33.089 }, 00:16:33.089 "memory_domains": [ 00:16:33.089 { 00:16:33.089 "dma_device_id": "system", 00:16:33.089 "dma_device_type": 1 00:16:33.089 }, 00:16:33.089 { 00:16:33.089 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:33.089 "dma_device_type": 2 00:16:33.089 } 00:16:33.089 ], 00:16:33.089 "driver_specific": {} 00:16:33.089 } 00:16:33.089 ] 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.089 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.090 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.090 16:17:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.350 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.350 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.350 "name": "Existed_Raid", 00:16:33.350 "uuid": "2e691c0e-cc8d-4850-8891-059eb917815c", 00:16:33.350 "strip_size_kb": 64, 00:16:33.350 "state": "configuring", 00:16:33.350 "raid_level": "raid5f", 00:16:33.350 "superblock": true, 00:16:33.350 "num_base_bdevs": 4, 00:16:33.350 "num_base_bdevs_discovered": 1, 00:16:33.350 "num_base_bdevs_operational": 4, 00:16:33.350 "base_bdevs_list": [ 00:16:33.350 { 00:16:33.350 "name": "BaseBdev1", 00:16:33.350 "uuid": "dd6232d7-2036-400d-b70e-f8f4976de943", 00:16:33.350 "is_configured": true, 00:16:33.350 "data_offset": 2048, 00:16:33.350 "data_size": 63488 00:16:33.350 }, 00:16:33.350 { 00:16:33.350 "name": "BaseBdev2", 00:16:33.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.350 "is_configured": false, 00:16:33.350 "data_offset": 0, 00:16:33.350 "data_size": 0 00:16:33.350 }, 00:16:33.350 { 00:16:33.350 "name": "BaseBdev3", 00:16:33.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.350 "is_configured": false, 00:16:33.350 "data_offset": 0, 00:16:33.350 "data_size": 0 00:16:33.350 }, 00:16:33.350 { 00:16:33.350 "name": "BaseBdev4", 00:16:33.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.350 "is_configured": false, 00:16:33.350 "data_offset": 0, 00:16:33.350 "data_size": 0 00:16:33.350 } 00:16:33.350 ] 00:16:33.350 }' 00:16:33.350 16:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.350 16:17:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.610 16:17:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.610 [2024-09-28 16:17:48.242634] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.610 [2024-09-28 16:17:48.242671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.610 [2024-09-28 16:17:48.254668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.610 [2024-09-28 16:17:48.256319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.610 [2024-09-28 16:17:48.256407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.610 [2024-09-28 16:17:48.256421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.610 [2024-09-28 16:17:48.256432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.610 [2024-09-28 16:17:48.256438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:33.610 [2024-09-28 16:17:48.256447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.610 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.610 16:17:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.871 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.871 "name": "Existed_Raid", 00:16:33.871 "uuid": "3b9689d8-6560-4536-af8e-dc999885f123", 00:16:33.871 "strip_size_kb": 64, 00:16:33.871 "state": "configuring", 00:16:33.871 "raid_level": "raid5f", 00:16:33.871 "superblock": true, 00:16:33.871 "num_base_bdevs": 4, 00:16:33.871 "num_base_bdevs_discovered": 1, 00:16:33.871 "num_base_bdevs_operational": 4, 00:16:33.871 "base_bdevs_list": [ 00:16:33.871 { 00:16:33.871 "name": "BaseBdev1", 00:16:33.871 "uuid": "dd6232d7-2036-400d-b70e-f8f4976de943", 00:16:33.871 "is_configured": true, 00:16:33.871 "data_offset": 2048, 00:16:33.871 "data_size": 63488 00:16:33.871 }, 00:16:33.871 { 00:16:33.871 "name": "BaseBdev2", 00:16:33.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.871 "is_configured": false, 00:16:33.871 "data_offset": 0, 00:16:33.871 "data_size": 0 00:16:33.871 }, 00:16:33.871 { 00:16:33.871 "name": "BaseBdev3", 00:16:33.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.871 "is_configured": false, 00:16:33.871 "data_offset": 0, 00:16:33.871 "data_size": 0 00:16:33.871 }, 00:16:33.871 { 00:16:33.871 "name": "BaseBdev4", 00:16:33.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.871 "is_configured": false, 00:16:33.871 "data_offset": 0, 00:16:33.871 "data_size": 0 00:16:33.871 } 00:16:33.871 ] 00:16:33.871 }' 00:16:33.871 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.871 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.132 [2024-09-28 16:17:48.746029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.132 BaseBdev2 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.132 [ 00:16:34.132 { 00:16:34.132 "name": "BaseBdev2", 00:16:34.132 "aliases": [ 00:16:34.132 
"1f9f627b-3914-4803-a803-491859d72fc6" 00:16:34.132 ], 00:16:34.132 "product_name": "Malloc disk", 00:16:34.132 "block_size": 512, 00:16:34.132 "num_blocks": 65536, 00:16:34.132 "uuid": "1f9f627b-3914-4803-a803-491859d72fc6", 00:16:34.132 "assigned_rate_limits": { 00:16:34.132 "rw_ios_per_sec": 0, 00:16:34.132 "rw_mbytes_per_sec": 0, 00:16:34.132 "r_mbytes_per_sec": 0, 00:16:34.132 "w_mbytes_per_sec": 0 00:16:34.132 }, 00:16:34.132 "claimed": true, 00:16:34.132 "claim_type": "exclusive_write", 00:16:34.132 "zoned": false, 00:16:34.132 "supported_io_types": { 00:16:34.132 "read": true, 00:16:34.132 "write": true, 00:16:34.132 "unmap": true, 00:16:34.132 "flush": true, 00:16:34.132 "reset": true, 00:16:34.132 "nvme_admin": false, 00:16:34.132 "nvme_io": false, 00:16:34.132 "nvme_io_md": false, 00:16:34.132 "write_zeroes": true, 00:16:34.132 "zcopy": true, 00:16:34.132 "get_zone_info": false, 00:16:34.132 "zone_management": false, 00:16:34.132 "zone_append": false, 00:16:34.132 "compare": false, 00:16:34.132 "compare_and_write": false, 00:16:34.132 "abort": true, 00:16:34.132 "seek_hole": false, 00:16:34.132 "seek_data": false, 00:16:34.132 "copy": true, 00:16:34.132 "nvme_iov_md": false 00:16:34.132 }, 00:16:34.132 "memory_domains": [ 00:16:34.132 { 00:16:34.132 "dma_device_id": "system", 00:16:34.132 "dma_device_type": 1 00:16:34.132 }, 00:16:34.132 { 00:16:34.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.132 "dma_device_type": 2 00:16:34.132 } 00:16:34.132 ], 00:16:34.132 "driver_specific": {} 00:16:34.132 } 00:16:34.132 ] 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.132 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.392 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.392 "name": "Existed_Raid", 00:16:34.392 "uuid": 
"3b9689d8-6560-4536-af8e-dc999885f123", 00:16:34.392 "strip_size_kb": 64, 00:16:34.392 "state": "configuring", 00:16:34.392 "raid_level": "raid5f", 00:16:34.392 "superblock": true, 00:16:34.392 "num_base_bdevs": 4, 00:16:34.392 "num_base_bdevs_discovered": 2, 00:16:34.392 "num_base_bdevs_operational": 4, 00:16:34.392 "base_bdevs_list": [ 00:16:34.392 { 00:16:34.392 "name": "BaseBdev1", 00:16:34.392 "uuid": "dd6232d7-2036-400d-b70e-f8f4976de943", 00:16:34.392 "is_configured": true, 00:16:34.392 "data_offset": 2048, 00:16:34.392 "data_size": 63488 00:16:34.392 }, 00:16:34.392 { 00:16:34.392 "name": "BaseBdev2", 00:16:34.392 "uuid": "1f9f627b-3914-4803-a803-491859d72fc6", 00:16:34.392 "is_configured": true, 00:16:34.392 "data_offset": 2048, 00:16:34.392 "data_size": 63488 00:16:34.392 }, 00:16:34.392 { 00:16:34.392 "name": "BaseBdev3", 00:16:34.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.392 "is_configured": false, 00:16:34.392 "data_offset": 0, 00:16:34.392 "data_size": 0 00:16:34.392 }, 00:16:34.392 { 00:16:34.393 "name": "BaseBdev4", 00:16:34.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.393 "is_configured": false, 00:16:34.393 "data_offset": 0, 00:16:34.393 "data_size": 0 00:16:34.393 } 00:16:34.393 ] 00:16:34.393 }' 00:16:34.393 16:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.393 16:17:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.653 [2024-09-28 16:17:49.287488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.653 BaseBdev3 
00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.653 [ 00:16:34.653 { 00:16:34.653 "name": "BaseBdev3", 00:16:34.653 "aliases": [ 00:16:34.653 "d8eae53b-226e-4e83-9e07-d5af09a43c63" 00:16:34.653 ], 00:16:34.653 "product_name": "Malloc disk", 00:16:34.653 "block_size": 512, 00:16:34.653 "num_blocks": 65536, 00:16:34.653 "uuid": "d8eae53b-226e-4e83-9e07-d5af09a43c63", 00:16:34.653 
"assigned_rate_limits": { 00:16:34.653 "rw_ios_per_sec": 0, 00:16:34.653 "rw_mbytes_per_sec": 0, 00:16:34.653 "r_mbytes_per_sec": 0, 00:16:34.653 "w_mbytes_per_sec": 0 00:16:34.653 }, 00:16:34.653 "claimed": true, 00:16:34.653 "claim_type": "exclusive_write", 00:16:34.653 "zoned": false, 00:16:34.653 "supported_io_types": { 00:16:34.653 "read": true, 00:16:34.653 "write": true, 00:16:34.653 "unmap": true, 00:16:34.653 "flush": true, 00:16:34.653 "reset": true, 00:16:34.653 "nvme_admin": false, 00:16:34.653 "nvme_io": false, 00:16:34.653 "nvme_io_md": false, 00:16:34.653 "write_zeroes": true, 00:16:34.653 "zcopy": true, 00:16:34.653 "get_zone_info": false, 00:16:34.653 "zone_management": false, 00:16:34.653 "zone_append": false, 00:16:34.653 "compare": false, 00:16:34.653 "compare_and_write": false, 00:16:34.653 "abort": true, 00:16:34.653 "seek_hole": false, 00:16:34.653 "seek_data": false, 00:16:34.653 "copy": true, 00:16:34.653 "nvme_iov_md": false 00:16:34.653 }, 00:16:34.653 "memory_domains": [ 00:16:34.653 { 00:16:34.653 "dma_device_id": "system", 00:16:34.653 "dma_device_type": 1 00:16:34.653 }, 00:16:34.653 { 00:16:34.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.653 "dma_device_type": 2 00:16:34.653 } 00:16:34.653 ], 00:16:34.653 "driver_specific": {} 00:16:34.653 } 00:16:34.653 ] 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:34.653 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.654 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.914 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.914 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.914 "name": "Existed_Raid", 00:16:34.914 "uuid": "3b9689d8-6560-4536-af8e-dc999885f123", 00:16:34.914 "strip_size_kb": 64, 00:16:34.914 "state": "configuring", 00:16:34.914 "raid_level": "raid5f", 00:16:34.914 "superblock": true, 00:16:34.915 "num_base_bdevs": 4, 00:16:34.915 "num_base_bdevs_discovered": 3, 
00:16:34.915 "num_base_bdevs_operational": 4, 00:16:34.915 "base_bdevs_list": [ 00:16:34.915 { 00:16:34.915 "name": "BaseBdev1", 00:16:34.915 "uuid": "dd6232d7-2036-400d-b70e-f8f4976de943", 00:16:34.915 "is_configured": true, 00:16:34.915 "data_offset": 2048, 00:16:34.915 "data_size": 63488 00:16:34.915 }, 00:16:34.915 { 00:16:34.915 "name": "BaseBdev2", 00:16:34.915 "uuid": "1f9f627b-3914-4803-a803-491859d72fc6", 00:16:34.915 "is_configured": true, 00:16:34.915 "data_offset": 2048, 00:16:34.915 "data_size": 63488 00:16:34.915 }, 00:16:34.915 { 00:16:34.915 "name": "BaseBdev3", 00:16:34.915 "uuid": "d8eae53b-226e-4e83-9e07-d5af09a43c63", 00:16:34.915 "is_configured": true, 00:16:34.915 "data_offset": 2048, 00:16:34.915 "data_size": 63488 00:16:34.915 }, 00:16:34.915 { 00:16:34.915 "name": "BaseBdev4", 00:16:34.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.915 "is_configured": false, 00:16:34.915 "data_offset": 0, 00:16:34.915 "data_size": 0 00:16:34.915 } 00:16:34.915 ] 00:16:34.915 }' 00:16:34.915 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.915 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.176 [2024-09-28 16:17:49.791437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:35.176 [2024-09-28 16:17:49.791702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:35.176 [2024-09-28 16:17:49.791719] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:35.176 [2024-09-28 
16:17:49.791946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:35.176 BaseBdev4 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.176 [2024-09-28 16:17:49.798725] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:35.176 [2024-09-28 16:17:49.798750] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:35.176 [2024-09-28 16:17:49.798978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:35.176 16:17:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.176 [ 00:16:35.176 { 00:16:35.176 "name": "BaseBdev4", 00:16:35.176 "aliases": [ 00:16:35.176 "32fc46d2-ae33-4a9c-9cd9-17f89749b21b" 00:16:35.176 ], 00:16:35.176 "product_name": "Malloc disk", 00:16:35.176 "block_size": 512, 00:16:35.176 "num_blocks": 65536, 00:16:35.176 "uuid": "32fc46d2-ae33-4a9c-9cd9-17f89749b21b", 00:16:35.176 "assigned_rate_limits": { 00:16:35.176 "rw_ios_per_sec": 0, 00:16:35.176 "rw_mbytes_per_sec": 0, 00:16:35.176 "r_mbytes_per_sec": 0, 00:16:35.176 "w_mbytes_per_sec": 0 00:16:35.176 }, 00:16:35.176 "claimed": true, 00:16:35.176 "claim_type": "exclusive_write", 00:16:35.176 "zoned": false, 00:16:35.176 "supported_io_types": { 00:16:35.176 "read": true, 00:16:35.176 "write": true, 00:16:35.176 "unmap": true, 00:16:35.176 "flush": true, 00:16:35.176 "reset": true, 00:16:35.176 "nvme_admin": false, 00:16:35.176 "nvme_io": false, 00:16:35.176 "nvme_io_md": false, 00:16:35.176 "write_zeroes": true, 00:16:35.176 "zcopy": true, 00:16:35.176 "get_zone_info": false, 00:16:35.176 "zone_management": false, 00:16:35.176 "zone_append": false, 00:16:35.176 "compare": false, 00:16:35.176 "compare_and_write": false, 00:16:35.176 "abort": true, 00:16:35.176 "seek_hole": false, 00:16:35.176 "seek_data": false, 00:16:35.176 "copy": true, 00:16:35.176 "nvme_iov_md": false 00:16:35.176 }, 00:16:35.176 "memory_domains": [ 00:16:35.176 { 00:16:35.176 "dma_device_id": "system", 00:16:35.176 "dma_device_type": 1 00:16:35.176 }, 00:16:35.176 { 00:16:35.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.176 "dma_device_type": 2 00:16:35.176 } 00:16:35.176 ], 00:16:35.176 "driver_specific": {} 00:16:35.176 } 00:16:35.176 ] 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.176 16:17:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.176 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:35.435 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.435 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.435 "name": "Existed_Raid", 00:16:35.435 "uuid": "3b9689d8-6560-4536-af8e-dc999885f123", 00:16:35.435 "strip_size_kb": 64, 00:16:35.435 "state": "online", 00:16:35.435 "raid_level": "raid5f", 00:16:35.435 "superblock": true, 00:16:35.435 "num_base_bdevs": 4, 00:16:35.435 "num_base_bdevs_discovered": 4, 00:16:35.435 "num_base_bdevs_operational": 4, 00:16:35.435 "base_bdevs_list": [ 00:16:35.435 { 00:16:35.435 "name": "BaseBdev1", 00:16:35.435 "uuid": "dd6232d7-2036-400d-b70e-f8f4976de943", 00:16:35.435 "is_configured": true, 00:16:35.435 "data_offset": 2048, 00:16:35.435 "data_size": 63488 00:16:35.435 }, 00:16:35.435 { 00:16:35.435 "name": "BaseBdev2", 00:16:35.435 "uuid": "1f9f627b-3914-4803-a803-491859d72fc6", 00:16:35.435 "is_configured": true, 00:16:35.435 "data_offset": 2048, 00:16:35.435 "data_size": 63488 00:16:35.435 }, 00:16:35.435 { 00:16:35.435 "name": "BaseBdev3", 00:16:35.435 "uuid": "d8eae53b-226e-4e83-9e07-d5af09a43c63", 00:16:35.435 "is_configured": true, 00:16:35.435 "data_offset": 2048, 00:16:35.435 "data_size": 63488 00:16:35.435 }, 00:16:35.435 { 00:16:35.435 "name": "BaseBdev4", 00:16:35.435 "uuid": "32fc46d2-ae33-4a9c-9cd9-17f89749b21b", 00:16:35.435 "is_configured": true, 00:16:35.435 "data_offset": 2048, 00:16:35.435 "data_size": 63488 00:16:35.435 } 00:16:35.435 ] 00:16:35.435 }' 00:16:35.435 16:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.435 16:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.695 [2024-09-28 16:17:50.309902] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.695 "name": "Existed_Raid", 00:16:35.695 "aliases": [ 00:16:35.695 "3b9689d8-6560-4536-af8e-dc999885f123" 00:16:35.695 ], 00:16:35.695 "product_name": "Raid Volume", 00:16:35.695 "block_size": 512, 00:16:35.695 "num_blocks": 190464, 00:16:35.695 "uuid": "3b9689d8-6560-4536-af8e-dc999885f123", 00:16:35.695 "assigned_rate_limits": { 00:16:35.695 "rw_ios_per_sec": 0, 00:16:35.695 "rw_mbytes_per_sec": 0, 00:16:35.695 "r_mbytes_per_sec": 0, 00:16:35.695 "w_mbytes_per_sec": 0 00:16:35.695 }, 00:16:35.695 "claimed": false, 00:16:35.695 "zoned": false, 00:16:35.695 "supported_io_types": { 00:16:35.695 "read": true, 00:16:35.695 "write": true, 00:16:35.695 "unmap": false, 00:16:35.695 "flush": false, 
00:16:35.695 "reset": true, 00:16:35.695 "nvme_admin": false, 00:16:35.695 "nvme_io": false, 00:16:35.695 "nvme_io_md": false, 00:16:35.695 "write_zeroes": true, 00:16:35.695 "zcopy": false, 00:16:35.695 "get_zone_info": false, 00:16:35.695 "zone_management": false, 00:16:35.695 "zone_append": false, 00:16:35.695 "compare": false, 00:16:35.695 "compare_and_write": false, 00:16:35.695 "abort": false, 00:16:35.695 "seek_hole": false, 00:16:35.695 "seek_data": false, 00:16:35.695 "copy": false, 00:16:35.695 "nvme_iov_md": false 00:16:35.695 }, 00:16:35.695 "driver_specific": { 00:16:35.695 "raid": { 00:16:35.695 "uuid": "3b9689d8-6560-4536-af8e-dc999885f123", 00:16:35.695 "strip_size_kb": 64, 00:16:35.695 "state": "online", 00:16:35.695 "raid_level": "raid5f", 00:16:35.695 "superblock": true, 00:16:35.695 "num_base_bdevs": 4, 00:16:35.695 "num_base_bdevs_discovered": 4, 00:16:35.695 "num_base_bdevs_operational": 4, 00:16:35.695 "base_bdevs_list": [ 00:16:35.695 { 00:16:35.695 "name": "BaseBdev1", 00:16:35.695 "uuid": "dd6232d7-2036-400d-b70e-f8f4976de943", 00:16:35.695 "is_configured": true, 00:16:35.695 "data_offset": 2048, 00:16:35.695 "data_size": 63488 00:16:35.695 }, 00:16:35.695 { 00:16:35.695 "name": "BaseBdev2", 00:16:35.695 "uuid": "1f9f627b-3914-4803-a803-491859d72fc6", 00:16:35.695 "is_configured": true, 00:16:35.695 "data_offset": 2048, 00:16:35.695 "data_size": 63488 00:16:35.695 }, 00:16:35.695 { 00:16:35.695 "name": "BaseBdev3", 00:16:35.695 "uuid": "d8eae53b-226e-4e83-9e07-d5af09a43c63", 00:16:35.695 "is_configured": true, 00:16:35.695 "data_offset": 2048, 00:16:35.695 "data_size": 63488 00:16:35.695 }, 00:16:35.695 { 00:16:35.695 "name": "BaseBdev4", 00:16:35.695 "uuid": "32fc46d2-ae33-4a9c-9cd9-17f89749b21b", 00:16:35.695 "is_configured": true, 00:16:35.695 "data_offset": 2048, 00:16:35.695 "data_size": 63488 00:16:35.695 } 00:16:35.695 ] 00:16:35.695 } 00:16:35.695 } 00:16:35.695 }' 00:16:35.695 16:17:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:35.956 BaseBdev2 00:16:35.956 BaseBdev3 00:16:35.956 BaseBdev4' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.956 16:17:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:35.956 16:17:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.956 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.956 [2024-09-28 16:17:50.597346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.216 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.216 "name": "Existed_Raid", 00:16:36.216 "uuid": "3b9689d8-6560-4536-af8e-dc999885f123", 00:16:36.216 "strip_size_kb": 64, 00:16:36.216 "state": "online", 00:16:36.216 "raid_level": "raid5f", 00:16:36.216 "superblock": true, 00:16:36.216 "num_base_bdevs": 4, 00:16:36.216 "num_base_bdevs_discovered": 3, 00:16:36.216 "num_base_bdevs_operational": 3, 00:16:36.216 "base_bdevs_list": [ 00:16:36.216 { 00:16:36.216 "name": 
null, 00:16:36.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.216 "is_configured": false, 00:16:36.216 "data_offset": 0, 00:16:36.216 "data_size": 63488 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "name": "BaseBdev2", 00:16:36.216 "uuid": "1f9f627b-3914-4803-a803-491859d72fc6", 00:16:36.216 "is_configured": true, 00:16:36.216 "data_offset": 2048, 00:16:36.216 "data_size": 63488 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "name": "BaseBdev3", 00:16:36.216 "uuid": "d8eae53b-226e-4e83-9e07-d5af09a43c63", 00:16:36.216 "is_configured": true, 00:16:36.216 "data_offset": 2048, 00:16:36.216 "data_size": 63488 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "name": "BaseBdev4", 00:16:36.216 "uuid": "32fc46d2-ae33-4a9c-9cd9-17f89749b21b", 00:16:36.217 "is_configured": true, 00:16:36.217 "data_offset": 2048, 00:16:36.217 "data_size": 63488 00:16:36.217 } 00:16:36.217 ] 00:16:36.217 }' 00:16:36.217 16:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.217 16:17:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.476 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:36.476 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.477 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.477 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:36.477 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.477 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.737 [2024-09-28 16:17:51.193026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.737 [2024-09-28 16:17:51.193276] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.737 [2024-09-28 16:17:51.279790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.737 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.737 [2024-09-28 16:17:51.339717] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:36.996 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.996 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.996 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 [2024-09-28 
16:17:51.485018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:36.997 [2024-09-28 16:17:51.485138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 16:17:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 BaseBdev2 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.257 [ 00:16:37.257 { 00:16:37.257 "name": "BaseBdev2", 00:16:37.257 "aliases": [ 00:16:37.257 "7cab4548-8edc-4153-a9a8-8f9bc6775e8f" 00:16:37.257 ], 00:16:37.257 "product_name": "Malloc disk", 00:16:37.257 "block_size": 512, 00:16:37.257 
"num_blocks": 65536, 00:16:37.257 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:37.257 "assigned_rate_limits": { 00:16:37.257 "rw_ios_per_sec": 0, 00:16:37.257 "rw_mbytes_per_sec": 0, 00:16:37.257 "r_mbytes_per_sec": 0, 00:16:37.257 "w_mbytes_per_sec": 0 00:16:37.257 }, 00:16:37.257 "claimed": false, 00:16:37.257 "zoned": false, 00:16:37.257 "supported_io_types": { 00:16:37.257 "read": true, 00:16:37.257 "write": true, 00:16:37.257 "unmap": true, 00:16:37.257 "flush": true, 00:16:37.257 "reset": true, 00:16:37.257 "nvme_admin": false, 00:16:37.257 "nvme_io": false, 00:16:37.257 "nvme_io_md": false, 00:16:37.257 "write_zeroes": true, 00:16:37.257 "zcopy": true, 00:16:37.257 "get_zone_info": false, 00:16:37.257 "zone_management": false, 00:16:37.257 "zone_append": false, 00:16:37.257 "compare": false, 00:16:37.257 "compare_and_write": false, 00:16:37.257 "abort": true, 00:16:37.257 "seek_hole": false, 00:16:37.257 "seek_data": false, 00:16:37.257 "copy": true, 00:16:37.257 "nvme_iov_md": false 00:16:37.257 }, 00:16:37.257 "memory_domains": [ 00:16:37.257 { 00:16:37.257 "dma_device_id": "system", 00:16:37.257 "dma_device_type": 1 00:16:37.257 }, 00:16:37.257 { 00:16:37.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.257 "dma_device_type": 2 00:16:37.257 } 00:16:37.257 ], 00:16:37.257 "driver_specific": {} 00:16:37.257 } 00:16:37.257 ] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.257 16:17:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.257 BaseBdev3 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.257 [ 00:16:37.257 { 00:16:37.257 "name": "BaseBdev3", 00:16:37.257 "aliases": [ 00:16:37.257 
"6c86bdba-f016-4510-a730-8ed95f98f737" 00:16:37.257 ], 00:16:37.257 "product_name": "Malloc disk", 00:16:37.257 "block_size": 512, 00:16:37.257 "num_blocks": 65536, 00:16:37.257 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:37.257 "assigned_rate_limits": { 00:16:37.257 "rw_ios_per_sec": 0, 00:16:37.257 "rw_mbytes_per_sec": 0, 00:16:37.257 "r_mbytes_per_sec": 0, 00:16:37.257 "w_mbytes_per_sec": 0 00:16:37.257 }, 00:16:37.257 "claimed": false, 00:16:37.257 "zoned": false, 00:16:37.257 "supported_io_types": { 00:16:37.257 "read": true, 00:16:37.257 "write": true, 00:16:37.257 "unmap": true, 00:16:37.257 "flush": true, 00:16:37.257 "reset": true, 00:16:37.257 "nvme_admin": false, 00:16:37.257 "nvme_io": false, 00:16:37.257 "nvme_io_md": false, 00:16:37.257 "write_zeroes": true, 00:16:37.257 "zcopy": true, 00:16:37.257 "get_zone_info": false, 00:16:37.257 "zone_management": false, 00:16:37.257 "zone_append": false, 00:16:37.257 "compare": false, 00:16:37.257 "compare_and_write": false, 00:16:37.257 "abort": true, 00:16:37.257 "seek_hole": false, 00:16:37.257 "seek_data": false, 00:16:37.257 "copy": true, 00:16:37.257 "nvme_iov_md": false 00:16:37.257 }, 00:16:37.257 "memory_domains": [ 00:16:37.257 { 00:16:37.257 "dma_device_id": "system", 00:16:37.257 "dma_device_type": 1 00:16:37.257 }, 00:16:37.257 { 00:16:37.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.257 "dma_device_type": 2 00:16:37.257 } 00:16:37.257 ], 00:16:37.257 "driver_specific": {} 00:16:37.257 } 00:16:37.257 ] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.257 16:17:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.257 BaseBdev4 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:37.257 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:37.258 [ 00:16:37.258 { 00:16:37.258 "name": "BaseBdev4", 00:16:37.258 "aliases": [ 00:16:37.258 "82e526eb-7b24-4480-a8ef-60e86b175b26" 00:16:37.258 ], 00:16:37.258 "product_name": "Malloc disk", 00:16:37.258 "block_size": 512, 00:16:37.258 "num_blocks": 65536, 00:16:37.258 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:37.258 "assigned_rate_limits": { 00:16:37.258 "rw_ios_per_sec": 0, 00:16:37.258 "rw_mbytes_per_sec": 0, 00:16:37.258 "r_mbytes_per_sec": 0, 00:16:37.258 "w_mbytes_per_sec": 0 00:16:37.258 }, 00:16:37.258 "claimed": false, 00:16:37.258 "zoned": false, 00:16:37.258 "supported_io_types": { 00:16:37.258 "read": true, 00:16:37.258 "write": true, 00:16:37.258 "unmap": true, 00:16:37.258 "flush": true, 00:16:37.258 "reset": true, 00:16:37.258 "nvme_admin": false, 00:16:37.258 "nvme_io": false, 00:16:37.258 "nvme_io_md": false, 00:16:37.258 "write_zeroes": true, 00:16:37.258 "zcopy": true, 00:16:37.258 "get_zone_info": false, 00:16:37.258 "zone_management": false, 00:16:37.258 "zone_append": false, 00:16:37.258 "compare": false, 00:16:37.258 "compare_and_write": false, 00:16:37.258 "abort": true, 00:16:37.258 "seek_hole": false, 00:16:37.258 "seek_data": false, 00:16:37.258 "copy": true, 00:16:37.258 "nvme_iov_md": false 00:16:37.258 }, 00:16:37.258 "memory_domains": [ 00:16:37.258 { 00:16:37.258 "dma_device_id": "system", 00:16:37.258 "dma_device_type": 1 00:16:37.258 }, 00:16:37.258 { 00:16:37.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.258 "dma_device_type": 2 00:16:37.258 } 00:16:37.258 ], 00:16:37.258 "driver_specific": {} 00:16:37.258 } 00:16:37.258 ] 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:37.258 16:17:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.258 [2024-09-28 16:17:51.858981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.258 [2024-09-28 16:17:51.859038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.258 [2024-09-28 16:17:51.859056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.258 [2024-09-28 16:17:51.860827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.258 [2024-09-28 16:17:51.860916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.258 "name": "Existed_Raid", 00:16:37.258 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:37.258 "strip_size_kb": 64, 00:16:37.258 "state": "configuring", 00:16:37.258 "raid_level": "raid5f", 00:16:37.258 "superblock": true, 00:16:37.258 "num_base_bdevs": 4, 00:16:37.258 "num_base_bdevs_discovered": 3, 00:16:37.258 "num_base_bdevs_operational": 4, 00:16:37.258 "base_bdevs_list": [ 00:16:37.258 { 00:16:37.258 "name": "BaseBdev1", 00:16:37.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.258 "is_configured": false, 00:16:37.258 "data_offset": 0, 00:16:37.258 "data_size": 0 00:16:37.258 }, 00:16:37.258 { 00:16:37.258 "name": "BaseBdev2", 00:16:37.258 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:37.258 "is_configured": true, 00:16:37.258 "data_offset": 2048, 00:16:37.258 
"data_size": 63488 00:16:37.258 }, 00:16:37.258 { 00:16:37.258 "name": "BaseBdev3", 00:16:37.258 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:37.258 "is_configured": true, 00:16:37.258 "data_offset": 2048, 00:16:37.258 "data_size": 63488 00:16:37.258 }, 00:16:37.258 { 00:16:37.258 "name": "BaseBdev4", 00:16:37.258 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:37.258 "is_configured": true, 00:16:37.258 "data_offset": 2048, 00:16:37.258 "data_size": 63488 00:16:37.258 } 00:16:37.258 ] 00:16:37.258 }' 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.258 16:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.828 [2024-09-28 16:17:52.286209] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.828 16:17:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.828 "name": "Existed_Raid", 00:16:37.828 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:37.828 "strip_size_kb": 64, 00:16:37.828 "state": "configuring", 00:16:37.828 "raid_level": "raid5f", 00:16:37.828 "superblock": true, 00:16:37.828 "num_base_bdevs": 4, 00:16:37.828 "num_base_bdevs_discovered": 2, 00:16:37.828 "num_base_bdevs_operational": 4, 00:16:37.828 "base_bdevs_list": [ 00:16:37.828 { 00:16:37.828 "name": "BaseBdev1", 00:16:37.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.828 "is_configured": false, 00:16:37.828 "data_offset": 0, 00:16:37.828 "data_size": 0 00:16:37.828 }, 00:16:37.828 { 00:16:37.828 "name": null, 00:16:37.828 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:37.828 
"is_configured": false, 00:16:37.828 "data_offset": 0, 00:16:37.828 "data_size": 63488 00:16:37.828 }, 00:16:37.828 { 00:16:37.828 "name": "BaseBdev3", 00:16:37.828 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:37.828 "is_configured": true, 00:16:37.828 "data_offset": 2048, 00:16:37.828 "data_size": 63488 00:16:37.828 }, 00:16:37.828 { 00:16:37.828 "name": "BaseBdev4", 00:16:37.828 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:37.828 "is_configured": true, 00:16:37.828 "data_offset": 2048, 00:16:37.828 "data_size": 63488 00:16:37.828 } 00:16:37.828 ] 00:16:37.828 }' 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.828 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.398 [2024-09-28 16:17:52.866998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:38.398 BaseBdev1 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:38.398 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.399 [ 00:16:38.399 { 00:16:38.399 "name": "BaseBdev1", 00:16:38.399 "aliases": [ 00:16:38.399 "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626" 00:16:38.399 ], 00:16:38.399 "product_name": "Malloc disk", 00:16:38.399 "block_size": 512, 00:16:38.399 "num_blocks": 65536, 00:16:38.399 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 
00:16:38.399 "assigned_rate_limits": { 00:16:38.399 "rw_ios_per_sec": 0, 00:16:38.399 "rw_mbytes_per_sec": 0, 00:16:38.399 "r_mbytes_per_sec": 0, 00:16:38.399 "w_mbytes_per_sec": 0 00:16:38.399 }, 00:16:38.399 "claimed": true, 00:16:38.399 "claim_type": "exclusive_write", 00:16:38.399 "zoned": false, 00:16:38.399 "supported_io_types": { 00:16:38.399 "read": true, 00:16:38.399 "write": true, 00:16:38.399 "unmap": true, 00:16:38.399 "flush": true, 00:16:38.399 "reset": true, 00:16:38.399 "nvme_admin": false, 00:16:38.399 "nvme_io": false, 00:16:38.399 "nvme_io_md": false, 00:16:38.399 "write_zeroes": true, 00:16:38.399 "zcopy": true, 00:16:38.399 "get_zone_info": false, 00:16:38.399 "zone_management": false, 00:16:38.399 "zone_append": false, 00:16:38.399 "compare": false, 00:16:38.399 "compare_and_write": false, 00:16:38.399 "abort": true, 00:16:38.399 "seek_hole": false, 00:16:38.399 "seek_data": false, 00:16:38.399 "copy": true, 00:16:38.399 "nvme_iov_md": false 00:16:38.399 }, 00:16:38.399 "memory_domains": [ 00:16:38.399 { 00:16:38.399 "dma_device_id": "system", 00:16:38.399 "dma_device_type": 1 00:16:38.399 }, 00:16:38.399 { 00:16:38.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.399 "dma_device_type": 2 00:16:38.399 } 00:16:38.399 ], 00:16:38.399 "driver_specific": {} 00:16:38.399 } 00:16:38.399 ] 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.399 16:17:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.399 "name": "Existed_Raid", 00:16:38.399 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:38.399 "strip_size_kb": 64, 00:16:38.399 "state": "configuring", 00:16:38.399 "raid_level": "raid5f", 00:16:38.399 "superblock": true, 00:16:38.399 "num_base_bdevs": 4, 00:16:38.399 "num_base_bdevs_discovered": 3, 00:16:38.399 "num_base_bdevs_operational": 4, 00:16:38.399 "base_bdevs_list": [ 00:16:38.399 { 00:16:38.399 "name": "BaseBdev1", 00:16:38.399 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 
00:16:38.399 "is_configured": true, 00:16:38.399 "data_offset": 2048, 00:16:38.399 "data_size": 63488 00:16:38.399 }, 00:16:38.399 { 00:16:38.399 "name": null, 00:16:38.399 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:38.399 "is_configured": false, 00:16:38.399 "data_offset": 0, 00:16:38.399 "data_size": 63488 00:16:38.399 }, 00:16:38.399 { 00:16:38.399 "name": "BaseBdev3", 00:16:38.399 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:38.399 "is_configured": true, 00:16:38.399 "data_offset": 2048, 00:16:38.399 "data_size": 63488 00:16:38.399 }, 00:16:38.399 { 00:16:38.399 "name": "BaseBdev4", 00:16:38.399 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:38.399 "is_configured": true, 00:16:38.399 "data_offset": 2048, 00:16:38.399 "data_size": 63488 00:16:38.399 } 00:16:38.399 ] 00:16:38.399 }' 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.399 16:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.968 [2024-09-28 16:17:53.418070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.968 "name": "Existed_Raid", 00:16:38.968 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:38.968 "strip_size_kb": 64, 00:16:38.968 "state": "configuring", 00:16:38.968 "raid_level": "raid5f", 00:16:38.968 "superblock": true, 00:16:38.968 "num_base_bdevs": 4, 00:16:38.968 "num_base_bdevs_discovered": 2, 00:16:38.968 "num_base_bdevs_operational": 4, 00:16:38.968 "base_bdevs_list": [ 00:16:38.968 { 00:16:38.968 "name": "BaseBdev1", 00:16:38.968 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 00:16:38.968 "is_configured": true, 00:16:38.968 "data_offset": 2048, 00:16:38.968 "data_size": 63488 00:16:38.968 }, 00:16:38.968 { 00:16:38.968 "name": null, 00:16:38.968 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:38.968 "is_configured": false, 00:16:38.968 "data_offset": 0, 00:16:38.968 "data_size": 63488 00:16:38.968 }, 00:16:38.968 { 00:16:38.968 "name": null, 00:16:38.968 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:38.968 "is_configured": false, 00:16:38.968 "data_offset": 0, 00:16:38.968 "data_size": 63488 00:16:38.968 }, 00:16:38.968 { 00:16:38.968 "name": "BaseBdev4", 00:16:38.968 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:38.968 "is_configured": true, 00:16:38.968 "data_offset": 2048, 00:16:38.968 "data_size": 63488 00:16:38.968 } 00:16:38.968 ] 00:16:38.968 }' 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.968 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.232 [2024-09-28 16:17:53.893316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.232 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.504 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.504 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.504 "name": "Existed_Raid", 00:16:39.504 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:39.504 "strip_size_kb": 64, 00:16:39.504 "state": "configuring", 00:16:39.504 "raid_level": "raid5f", 00:16:39.504 "superblock": true, 00:16:39.504 "num_base_bdevs": 4, 00:16:39.504 "num_base_bdevs_discovered": 3, 00:16:39.504 "num_base_bdevs_operational": 4, 00:16:39.504 "base_bdevs_list": [ 00:16:39.504 { 00:16:39.504 "name": "BaseBdev1", 00:16:39.504 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 00:16:39.504 "is_configured": true, 00:16:39.504 "data_offset": 2048, 00:16:39.504 "data_size": 63488 00:16:39.504 }, 00:16:39.504 { 00:16:39.504 "name": null, 00:16:39.504 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:39.504 "is_configured": false, 00:16:39.504 "data_offset": 0, 00:16:39.504 "data_size": 63488 00:16:39.504 }, 00:16:39.504 { 00:16:39.504 "name": "BaseBdev3", 00:16:39.504 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 
00:16:39.504 "is_configured": true, 00:16:39.504 "data_offset": 2048, 00:16:39.504 "data_size": 63488 00:16:39.504 }, 00:16:39.504 { 00:16:39.504 "name": "BaseBdev4", 00:16:39.504 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:39.504 "is_configured": true, 00:16:39.504 "data_offset": 2048, 00:16:39.504 "data_size": 63488 00:16:39.504 } 00:16:39.504 ] 00:16:39.504 }' 00:16:39.505 16:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.505 16:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.783 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.783 [2024-09-28 16:17:54.420397] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.067 "name": "Existed_Raid", 00:16:40.067 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:40.067 "strip_size_kb": 64, 00:16:40.067 "state": "configuring", 00:16:40.067 "raid_level": "raid5f", 
00:16:40.067 "superblock": true, 00:16:40.067 "num_base_bdevs": 4, 00:16:40.067 "num_base_bdevs_discovered": 2, 00:16:40.067 "num_base_bdevs_operational": 4, 00:16:40.067 "base_bdevs_list": [ 00:16:40.067 { 00:16:40.067 "name": null, 00:16:40.067 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 00:16:40.067 "is_configured": false, 00:16:40.067 "data_offset": 0, 00:16:40.067 "data_size": 63488 00:16:40.067 }, 00:16:40.067 { 00:16:40.067 "name": null, 00:16:40.067 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:40.067 "is_configured": false, 00:16:40.067 "data_offset": 0, 00:16:40.067 "data_size": 63488 00:16:40.067 }, 00:16:40.067 { 00:16:40.067 "name": "BaseBdev3", 00:16:40.067 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:40.067 "is_configured": true, 00:16:40.067 "data_offset": 2048, 00:16:40.067 "data_size": 63488 00:16:40.067 }, 00:16:40.067 { 00:16:40.067 "name": "BaseBdev4", 00:16:40.067 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:40.067 "is_configured": true, 00:16:40.067 "data_offset": 2048, 00:16:40.067 "data_size": 63488 00:16:40.067 } 00:16:40.067 ] 00:16:40.067 }' 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.067 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.342 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:40.342 16:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.342 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.342 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.342 16:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.342 16:17:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:40.342 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:40.342 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.342 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.342 [2024-09-28 16:17:55.019056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.601 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.601 "name": "Existed_Raid", 00:16:40.601 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:40.601 "strip_size_kb": 64, 00:16:40.601 "state": "configuring", 00:16:40.601 "raid_level": "raid5f", 00:16:40.601 "superblock": true, 00:16:40.601 "num_base_bdevs": 4, 00:16:40.601 "num_base_bdevs_discovered": 3, 00:16:40.601 "num_base_bdevs_operational": 4, 00:16:40.601 "base_bdevs_list": [ 00:16:40.601 { 00:16:40.601 "name": null, 00:16:40.601 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 00:16:40.601 "is_configured": false, 00:16:40.602 "data_offset": 0, 00:16:40.602 "data_size": 63488 00:16:40.602 }, 00:16:40.602 { 00:16:40.602 "name": "BaseBdev2", 00:16:40.602 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:40.602 "is_configured": true, 00:16:40.602 "data_offset": 2048, 00:16:40.602 "data_size": 63488 00:16:40.602 }, 00:16:40.602 { 00:16:40.602 "name": "BaseBdev3", 00:16:40.602 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:40.602 "is_configured": true, 00:16:40.602 "data_offset": 2048, 00:16:40.602 "data_size": 63488 00:16:40.602 }, 00:16:40.602 { 00:16:40.602 "name": "BaseBdev4", 00:16:40.602 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:40.602 "is_configured": true, 00:16:40.602 "data_offset": 2048, 00:16:40.602 "data_size": 63488 00:16:40.602 } 00:16:40.602 ] 00:16:40.602 }' 00:16:40.602 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:40.602 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 875c40d1-84c1-4d0b-b88f-ad9f6ee7f626 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.861 [2024-09-28 16:17:55.532047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:40.861 [2024-09-28 16:17:55.532361] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:40.861 [2024-09-28 16:17:55.532396] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:40.861 [2024-09-28 16:17:55.532661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:40.861 NewBaseBdev 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:40.861 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:40.862 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:40.862 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:40.862 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:40.862 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:40.862 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.862 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.862 [2024-09-28 16:17:55.539055] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:40.862 [2024-09-28 16:17:55.539080] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:40.862 [2024-09-28 16:17:55.539238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.121 [ 00:16:41.121 { 00:16:41.121 "name": "NewBaseBdev", 00:16:41.121 "aliases": [ 00:16:41.121 "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626" 00:16:41.121 ], 00:16:41.121 "product_name": "Malloc disk", 00:16:41.121 "block_size": 512, 00:16:41.121 "num_blocks": 65536, 00:16:41.121 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 00:16:41.121 "assigned_rate_limits": { 00:16:41.121 "rw_ios_per_sec": 0, 00:16:41.121 "rw_mbytes_per_sec": 0, 00:16:41.121 "r_mbytes_per_sec": 0, 00:16:41.121 "w_mbytes_per_sec": 0 00:16:41.121 }, 00:16:41.121 "claimed": true, 00:16:41.121 "claim_type": "exclusive_write", 00:16:41.121 "zoned": false, 00:16:41.121 "supported_io_types": { 00:16:41.121 "read": true, 00:16:41.121 "write": true, 00:16:41.121 "unmap": true, 00:16:41.121 "flush": true, 00:16:41.121 "reset": true, 00:16:41.121 "nvme_admin": false, 00:16:41.121 "nvme_io": false, 00:16:41.121 "nvme_io_md": false, 00:16:41.121 "write_zeroes": true, 00:16:41.121 "zcopy": true, 00:16:41.121 "get_zone_info": false, 00:16:41.121 "zone_management": false, 00:16:41.121 "zone_append": false, 00:16:41.121 "compare": false, 00:16:41.121 "compare_and_write": false, 00:16:41.121 "abort": true, 00:16:41.121 "seek_hole": false, 00:16:41.121 "seek_data": false, 00:16:41.121 "copy": true, 00:16:41.121 "nvme_iov_md": false 00:16:41.121 }, 00:16:41.121 "memory_domains": [ 00:16:41.121 { 00:16:41.121 "dma_device_id": "system", 00:16:41.121 "dma_device_type": 1 00:16:41.121 }, 00:16:41.121 { 00:16:41.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.121 "dma_device_type": 2 00:16:41.121 } 
00:16:41.121 ], 00:16:41.121 "driver_specific": {} 00:16:41.121 } 00:16:41.121 ] 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.121 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.122 
16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.122 "name": "Existed_Raid", 00:16:41.122 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:41.122 "strip_size_kb": 64, 00:16:41.122 "state": "online", 00:16:41.122 "raid_level": "raid5f", 00:16:41.122 "superblock": true, 00:16:41.122 "num_base_bdevs": 4, 00:16:41.122 "num_base_bdevs_discovered": 4, 00:16:41.122 "num_base_bdevs_operational": 4, 00:16:41.122 "base_bdevs_list": [ 00:16:41.122 { 00:16:41.122 "name": "NewBaseBdev", 00:16:41.122 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 00:16:41.122 "is_configured": true, 00:16:41.122 "data_offset": 2048, 00:16:41.122 "data_size": 63488 00:16:41.122 }, 00:16:41.122 { 00:16:41.122 "name": "BaseBdev2", 00:16:41.122 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:41.122 "is_configured": true, 00:16:41.122 "data_offset": 2048, 00:16:41.122 "data_size": 63488 00:16:41.122 }, 00:16:41.122 { 00:16:41.122 "name": "BaseBdev3", 00:16:41.122 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:41.122 "is_configured": true, 00:16:41.122 "data_offset": 2048, 00:16:41.122 "data_size": 63488 00:16:41.122 }, 00:16:41.122 { 00:16:41.122 "name": "BaseBdev4", 00:16:41.122 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:41.122 "is_configured": true, 00:16:41.122 "data_offset": 2048, 00:16:41.122 "data_size": 63488 00:16:41.122 } 00:16:41.122 ] 00:16:41.122 }' 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.122 16:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.392 [2024-09-28 16:17:56.042969] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.392 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.652 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:41.652 "name": "Existed_Raid", 00:16:41.652 "aliases": [ 00:16:41.652 "91ebccb3-dbcc-4066-aaf7-e28b1da059f4" 00:16:41.652 ], 00:16:41.652 "product_name": "Raid Volume", 00:16:41.652 "block_size": 512, 00:16:41.652 "num_blocks": 190464, 00:16:41.652 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:41.652 "assigned_rate_limits": { 00:16:41.652 "rw_ios_per_sec": 0, 00:16:41.652 "rw_mbytes_per_sec": 0, 00:16:41.652 "r_mbytes_per_sec": 0, 00:16:41.652 "w_mbytes_per_sec": 0 00:16:41.652 }, 00:16:41.652 "claimed": false, 00:16:41.652 "zoned": false, 00:16:41.652 "supported_io_types": { 00:16:41.652 "read": true, 00:16:41.652 "write": true, 00:16:41.652 "unmap": false, 00:16:41.652 "flush": false, 
00:16:41.652 "reset": true, 00:16:41.652 "nvme_admin": false, 00:16:41.652 "nvme_io": false, 00:16:41.652 "nvme_io_md": false, 00:16:41.652 "write_zeroes": true, 00:16:41.652 "zcopy": false, 00:16:41.652 "get_zone_info": false, 00:16:41.652 "zone_management": false, 00:16:41.652 "zone_append": false, 00:16:41.652 "compare": false, 00:16:41.652 "compare_and_write": false, 00:16:41.652 "abort": false, 00:16:41.652 "seek_hole": false, 00:16:41.652 "seek_data": false, 00:16:41.652 "copy": false, 00:16:41.652 "nvme_iov_md": false 00:16:41.652 }, 00:16:41.652 "driver_specific": { 00:16:41.652 "raid": { 00:16:41.652 "uuid": "91ebccb3-dbcc-4066-aaf7-e28b1da059f4", 00:16:41.652 "strip_size_kb": 64, 00:16:41.652 "state": "online", 00:16:41.652 "raid_level": "raid5f", 00:16:41.652 "superblock": true, 00:16:41.652 "num_base_bdevs": 4, 00:16:41.653 "num_base_bdevs_discovered": 4, 00:16:41.653 "num_base_bdevs_operational": 4, 00:16:41.653 "base_bdevs_list": [ 00:16:41.653 { 00:16:41.653 "name": "NewBaseBdev", 00:16:41.653 "uuid": "875c40d1-84c1-4d0b-b88f-ad9f6ee7f626", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev2", 00:16:41.653 "uuid": "7cab4548-8edc-4153-a9a8-8f9bc6775e8f", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev3", 00:16:41.653 "uuid": "6c86bdba-f016-4510-a730-8ed95f98f737", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev4", 00:16:41.653 "uuid": "82e526eb-7b24-4480-a8ef-60e86b175b26", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 } 00:16:41.653 ] 00:16:41.653 } 00:16:41.653 } 00:16:41.653 }' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:41.653 BaseBdev2 00:16:41.653 BaseBdev3 00:16:41.653 BaseBdev4' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:41.653 
16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.653 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.913 [2024-09-28 16:17:56.366257] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:41.913 [2024-09-28 16:17:56.366281] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.913 [2024-09-28 16:17:56.366338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.913 [2024-09-28 16:17:56.366586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.913 [2024-09-28 16:17:56.366605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83442 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83442 ']' 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83442 
00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83442 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83442' 00:16:41.913 killing process with pid 83442 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83442 00:16:41.913 [2024-09-28 16:17:56.416168] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.913 16:17:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83442 00:16:42.172 [2024-09-28 16:17:56.781243] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.553 16:17:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:43.553 00:16:43.553 real 0m11.738s 00:16:43.553 user 0m18.675s 00:16:43.553 sys 0m2.200s 00:16:43.553 16:17:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.553 16:17:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.553 ************************************ 00:16:43.553 END TEST raid5f_state_function_test_sb 00:16:43.553 ************************************ 00:16:43.553 16:17:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:43.553 16:17:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 
']' 00:16:43.553 16:17:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.553 16:17:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.553 ************************************ 00:16:43.553 START TEST raid5f_superblock_test 00:16:43.553 ************************************ 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84119 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84119 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84119 ']' 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.553 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.553 [2024-09-28 16:17:58.133115] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:43.553 [2024-09-28 16:17:58.133273] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84119 ] 00:16:43.813 [2024-09-28 16:17:58.299916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.813 [2024-09-28 16:17:58.492907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.072 [2024-09-28 16:17:58.684664] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.072 [2024-09-28 16:17:58.684714] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.332 malloc1 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.332 16:17:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.332 [2024-09-28 16:17:59.003894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:44.332 [2024-09-28 16:17:59.004048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.332 [2024-09-28 16:17:59.004089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:44.332 [2024-09-28 16:17:59.004121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.332 [2024-09-28 16:17:59.005997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.332 [2024-09-28 16:17:59.006066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:44.332 pt1 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.332 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 malloc2 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 [2024-09-28 16:17:59.093136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.593 [2024-09-28 16:17:59.093191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.593 [2024-09-28 16:17:59.093212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:44.593 [2024-09-28 16:17:59.093221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.593 [2024-09-28 16:17:59.095300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.593 [2024-09-28 16:17:59.095335] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.593 pt2 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 malloc3 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 [2024-09-28 16:17:59.146549] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:44.593 [2024-09-28 16:17:59.146675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.593 [2024-09-28 16:17:59.146709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:44.593 [2024-09-28 16:17:59.146737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.593 [2024-09-28 16:17:59.148604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.593 [2024-09-28 16:17:59.148675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:44.593 pt3 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 16:17:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 malloc4 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.593 [2024-09-28 16:17:59.203372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:44.593 [2024-09-28 16:17:59.203474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.593 [2024-09-28 16:17:59.203513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:44.593 [2024-09-28 16:17:59.203542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.593 [2024-09-28 16:17:59.205392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.593 [2024-09-28 16:17:59.205456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:44.593 pt4 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.593 [2024-09-28 16:17:59.215412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.593 [2024-09-28 16:17:59.217035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.593 [2024-09-28 16:17:59.217132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.593 [2024-09-28 16:17:59.217208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:44.593 [2024-09-28 16:17:59.217427] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:44.593 [2024-09-28 16:17:59.217487] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.593 [2024-09-28 16:17:59.217733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:44.593 [2024-09-28 16:17:59.224333] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:44.593 [2024-09-28 16:17:59.224386] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:44.593 [2024-09-28 16:17:59.224581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.593 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.594 
16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.594 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.594 "name": "raid_bdev1", 00:16:44.594 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:44.594 "strip_size_kb": 64, 00:16:44.594 "state": "online", 00:16:44.594 "raid_level": "raid5f", 00:16:44.594 "superblock": true, 00:16:44.594 "num_base_bdevs": 4, 00:16:44.594 "num_base_bdevs_discovered": 4, 00:16:44.594 "num_base_bdevs_operational": 4, 00:16:44.594 "base_bdevs_list": [ 00:16:44.594 { 00:16:44.594 "name": "pt1", 00:16:44.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.594 "is_configured": true, 00:16:44.594 "data_offset": 2048, 00:16:44.594 "data_size": 63488 00:16:44.594 }, 00:16:44.594 { 00:16:44.594 "name": "pt2", 00:16:44.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.594 "is_configured": true, 00:16:44.594 "data_offset": 2048, 00:16:44.594 
"data_size": 63488 00:16:44.594 }, 00:16:44.594 { 00:16:44.594 "name": "pt3", 00:16:44.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.594 "is_configured": true, 00:16:44.594 "data_offset": 2048, 00:16:44.594 "data_size": 63488 00:16:44.594 }, 00:16:44.594 { 00:16:44.594 "name": "pt4", 00:16:44.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.594 "is_configured": true, 00:16:44.594 "data_offset": 2048, 00:16:44.594 "data_size": 63488 00:16:44.594 } 00:16:44.594 ] 00:16:44.594 }' 00:16:44.854 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.854 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.114 [2024-09-28 16:17:59.683402] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:45.114 "name": "raid_bdev1", 00:16:45.114 "aliases": [ 00:16:45.114 "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c" 00:16:45.114 ], 00:16:45.114 "product_name": "Raid Volume", 00:16:45.114 "block_size": 512, 00:16:45.114 "num_blocks": 190464, 00:16:45.114 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:45.114 "assigned_rate_limits": { 00:16:45.114 "rw_ios_per_sec": 0, 00:16:45.114 "rw_mbytes_per_sec": 0, 00:16:45.114 "r_mbytes_per_sec": 0, 00:16:45.114 "w_mbytes_per_sec": 0 00:16:45.114 }, 00:16:45.114 "claimed": false, 00:16:45.114 "zoned": false, 00:16:45.114 "supported_io_types": { 00:16:45.114 "read": true, 00:16:45.114 "write": true, 00:16:45.114 "unmap": false, 00:16:45.114 "flush": false, 00:16:45.114 "reset": true, 00:16:45.114 "nvme_admin": false, 00:16:45.114 "nvme_io": false, 00:16:45.114 "nvme_io_md": false, 00:16:45.114 "write_zeroes": true, 00:16:45.114 "zcopy": false, 00:16:45.114 "get_zone_info": false, 00:16:45.114 "zone_management": false, 00:16:45.114 "zone_append": false, 00:16:45.114 "compare": false, 00:16:45.114 "compare_and_write": false, 00:16:45.114 "abort": false, 00:16:45.114 "seek_hole": false, 00:16:45.114 "seek_data": false, 00:16:45.114 "copy": false, 00:16:45.114 "nvme_iov_md": false 00:16:45.114 }, 00:16:45.114 "driver_specific": { 00:16:45.114 "raid": { 00:16:45.114 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:45.114 "strip_size_kb": 64, 00:16:45.114 "state": "online", 00:16:45.114 "raid_level": "raid5f", 00:16:45.114 "superblock": true, 00:16:45.114 "num_base_bdevs": 4, 00:16:45.114 "num_base_bdevs_discovered": 4, 00:16:45.114 "num_base_bdevs_operational": 4, 00:16:45.114 "base_bdevs_list": [ 00:16:45.114 { 00:16:45.114 "name": "pt1", 00:16:45.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:45.114 "is_configured": true, 00:16:45.114 "data_offset": 2048, 
00:16:45.114 "data_size": 63488 00:16:45.114 }, 00:16:45.114 { 00:16:45.114 "name": "pt2", 00:16:45.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.114 "is_configured": true, 00:16:45.114 "data_offset": 2048, 00:16:45.114 "data_size": 63488 00:16:45.114 }, 00:16:45.114 { 00:16:45.114 "name": "pt3", 00:16:45.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.114 "is_configured": true, 00:16:45.114 "data_offset": 2048, 00:16:45.114 "data_size": 63488 00:16:45.114 }, 00:16:45.114 { 00:16:45.114 "name": "pt4", 00:16:45.114 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.114 "is_configured": true, 00:16:45.114 "data_offset": 2048, 00:16:45.114 "data_size": 63488 00:16:45.114 } 00:16:45.114 ] 00:16:45.114 } 00:16:45.114 } 00:16:45.114 }' 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:45.114 pt2 00:16:45.114 pt3 00:16:45.114 pt4' 00:16:45.114 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.374 16:17:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.374 16:17:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.374 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.374 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.374 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:45.374 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.374 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.374 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.374 [2024-09-28 16:18:00.030749] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.374 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c56e4eb9-0e0d-44fa-a48c-6d65c626d69c 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c56e4eb9-0e0d-44fa-a48c-6d65c626d69c ']' 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 [2024-09-28 16:18:00.074540] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.635 [2024-09-28 16:18:00.074605] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.635 [2024-09-28 16:18:00.074679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.635 [2024-09-28 16:18:00.074757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.635 [2024-09-28 16:18:00.074792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:45.635 
16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 16:18:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:45.635 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 [2024-09-28 16:18:00.238332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:45.635 [2024-09-28 16:18:00.239914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:45.635 [2024-09-28 16:18:00.239958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:45.635 [2024-09-28 16:18:00.239987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:45.635 [2024-09-28 16:18:00.240024] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:45.636 [2024-09-28 16:18:00.240061] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:45.636 [2024-09-28 16:18:00.240079] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:45.636 [2024-09-28 16:18:00.240095] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:45.636 [2024-09-28 16:18:00.240108] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.636 [2024-09-28 16:18:00.240118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:45.636 request: 00:16:45.636 { 00:16:45.636 "name": "raid_bdev1", 00:16:45.636 "raid_level": "raid5f", 00:16:45.636 "base_bdevs": [ 00:16:45.636 "malloc1", 00:16:45.636 "malloc2", 00:16:45.636 "malloc3", 00:16:45.636 "malloc4" 00:16:45.636 ], 00:16:45.636 "strip_size_kb": 64, 00:16:45.636 "superblock": false, 00:16:45.636 "method": "bdev_raid_create", 00:16:45.636 "req_id": 1 00:16:45.636 } 00:16:45.636 Got JSON-RPC error response 
00:16:45.636 response: 00:16:45.636 { 00:16:45.636 "code": -17, 00:16:45.636 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:45.636 } 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.636 [2024-09-28 16:18:00.302325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:45.636 [2024-09-28 16:18:00.302413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:45.636 [2024-09-28 16:18:00.302442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:45.636 [2024-09-28 16:18:00.302470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.636 [2024-09-28 16:18:00.304372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.636 [2024-09-28 16:18:00.304444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:45.636 [2024-09-28 16:18:00.304517] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:45.636 [2024-09-28 16:18:00.304584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:45.636 pt1 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.636 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.896 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.896 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.896 "name": "raid_bdev1", 00:16:45.896 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:45.896 "strip_size_kb": 64, 00:16:45.896 "state": "configuring", 00:16:45.896 "raid_level": "raid5f", 00:16:45.896 "superblock": true, 00:16:45.896 "num_base_bdevs": 4, 00:16:45.896 "num_base_bdevs_discovered": 1, 00:16:45.896 "num_base_bdevs_operational": 4, 00:16:45.896 "base_bdevs_list": [ 00:16:45.896 { 00:16:45.896 "name": "pt1", 00:16:45.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:45.896 "is_configured": true, 00:16:45.896 "data_offset": 2048, 00:16:45.896 "data_size": 63488 00:16:45.896 }, 00:16:45.896 { 00:16:45.896 "name": null, 00:16:45.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.896 "is_configured": false, 00:16:45.896 "data_offset": 2048, 00:16:45.896 "data_size": 63488 00:16:45.896 }, 00:16:45.896 { 00:16:45.896 "name": null, 00:16:45.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.896 "is_configured": false, 00:16:45.896 "data_offset": 2048, 00:16:45.896 "data_size": 63488 00:16:45.896 }, 00:16:45.896 { 00:16:45.896 "name": null, 00:16:45.896 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.896 "is_configured": false, 00:16:45.896 "data_offset": 2048, 00:16:45.896 "data_size": 63488 00:16:45.896 } 00:16:45.896 ] 00:16:45.896 }' 
00:16:45.896 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.896 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.157 [2024-09-28 16:18:00.737590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:46.157 [2024-09-28 16:18:00.737635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.157 [2024-09-28 16:18:00.737649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:46.157 [2024-09-28 16:18:00.737658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.157 [2024-09-28 16:18:00.737968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.157 [2024-09-28 16:18:00.737986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:46.157 [2024-09-28 16:18:00.738036] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:46.157 [2024-09-28 16:18:00.738053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:46.157 pt2 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.157 [2024-09-28 16:18:00.749592] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.157 "name": "raid_bdev1", 00:16:46.157 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:46.157 "strip_size_kb": 64, 00:16:46.157 "state": "configuring", 00:16:46.157 "raid_level": "raid5f", 00:16:46.157 "superblock": true, 00:16:46.157 "num_base_bdevs": 4, 00:16:46.157 "num_base_bdevs_discovered": 1, 00:16:46.157 "num_base_bdevs_operational": 4, 00:16:46.157 "base_bdevs_list": [ 00:16:46.157 { 00:16:46.157 "name": "pt1", 00:16:46.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:46.157 "is_configured": true, 00:16:46.157 "data_offset": 2048, 00:16:46.157 "data_size": 63488 00:16:46.157 }, 00:16:46.157 { 00:16:46.157 "name": null, 00:16:46.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.157 "is_configured": false, 00:16:46.157 "data_offset": 0, 00:16:46.157 "data_size": 63488 00:16:46.157 }, 00:16:46.157 { 00:16:46.157 "name": null, 00:16:46.157 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.157 "is_configured": false, 00:16:46.157 "data_offset": 2048, 00:16:46.157 "data_size": 63488 00:16:46.157 }, 00:16:46.157 { 00:16:46.157 "name": null, 00:16:46.157 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.157 "is_configured": false, 00:16:46.157 "data_offset": 2048, 00:16:46.157 "data_size": 63488 00:16:46.157 } 00:16:46.157 ] 00:16:46.157 }' 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.157 16:18:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
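The `bdev_raid.sh@478` loop that starts here attaches the remaining base bdevs one by one: `pt1` already exists, so the loop counts `i` from 1 to `num_base_bdevs - 1` and creates a passthru bdev `pt2`..`pt4` on top of `malloc2`..`malloc4`, each with a fixed UUID. A sketch of that loop, with `rpc_cmd` replaced by a hypothetical echo stub since the real wrapper talks to a live SPDK target:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the SPDK rpc.py wrapper used throughout the log.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
created=()

# pt1 was registered earlier, so start at i=1 and create pt2..pt4.
for (( i = 1; i < num_base_bdevs; i++ )); do
    n=$(( i + 1 ))
    rpc_cmd bdev_passthru_create -b "malloc${n}" -p "pt${n}" \
        -u "00000000-0000-0000-0000-00000000000${n}"
    created+=("pt${n}")
done

echo "created: ${created[*]}"
```

Once the fourth base bdev is claimed, the examine path finds a complete superblock set and brings `raid_bdev1` online, which is exactly the transition the log shows after `pt4` registers.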
00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.727 [2024-09-28 16:18:01.256687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:46.727 [2024-09-28 16:18:01.256774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.727 [2024-09-28 16:18:01.256805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:46.727 [2024-09-28 16:18:01.256830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.727 [2024-09-28 16:18:01.257155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.727 [2024-09-28 16:18:01.257206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:46.727 [2024-09-28 16:18:01.257310] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:46.727 [2024-09-28 16:18:01.257371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:46.727 pt2 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.727 [2024-09-28 16:18:01.268668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:46.727 [2024-09-28 16:18:01.268749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.727 [2024-09-28 16:18:01.268778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:46.727 [2024-09-28 16:18:01.268803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.727 [2024-09-28 16:18:01.269112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.727 [2024-09-28 16:18:01.269162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:46.727 [2024-09-28 16:18:01.269247] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:46.727 [2024-09-28 16:18:01.269297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:46.727 pt3 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.727 [2024-09-28 16:18:01.280628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:46.727 [2024-09-28 16:18:01.280672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.727 [2024-09-28 16:18:01.280687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:46.727 [2024-09-28 16:18:01.280694] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.727 [2024-09-28 16:18:01.281006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.727 [2024-09-28 16:18:01.281020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:46.727 [2024-09-28 16:18:01.281067] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:46.727 [2024-09-28 16:18:01.281087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:46.727 [2024-09-28 16:18:01.281209] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:46.727 [2024-09-28 16:18:01.281216] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:46.727 [2024-09-28 16:18:01.281491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:46.727 [2024-09-28 16:18:01.288074] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:46.727 [2024-09-28 16:18:01.288096] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:46.727 [2024-09-28 16:18:01.288252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.727 pt4 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.727 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.728 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.728 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.728 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.728 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.728 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.728 "name": "raid_bdev1", 00:16:46.728 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:46.728 "strip_size_kb": 64, 00:16:46.728 "state": "online", 00:16:46.728 "raid_level": "raid5f", 00:16:46.728 "superblock": true, 00:16:46.728 "num_base_bdevs": 4, 00:16:46.728 "num_base_bdevs_discovered": 4, 00:16:46.728 "num_base_bdevs_operational": 4, 00:16:46.728 "base_bdevs_list": [ 00:16:46.728 { 00:16:46.728 "name": "pt1", 00:16:46.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:46.728 "is_configured": true, 00:16:46.728 
"data_offset": 2048, 00:16:46.728 "data_size": 63488 00:16:46.728 }, 00:16:46.728 { 00:16:46.728 "name": "pt2", 00:16:46.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.728 "is_configured": true, 00:16:46.728 "data_offset": 2048, 00:16:46.728 "data_size": 63488 00:16:46.728 }, 00:16:46.728 { 00:16:46.728 "name": "pt3", 00:16:46.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.728 "is_configured": true, 00:16:46.728 "data_offset": 2048, 00:16:46.728 "data_size": 63488 00:16:46.728 }, 00:16:46.728 { 00:16:46.728 "name": "pt4", 00:16:46.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.728 "is_configured": true, 00:16:46.728 "data_offset": 2048, 00:16:46.728 "data_size": 63488 00:16:46.728 } 00:16:46.728 ] 00:16:46.728 }' 00:16:46.728 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.728 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.297 16:18:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.297 [2024-09-28 16:18:01.743388] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.297 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.297 "name": "raid_bdev1", 00:16:47.297 "aliases": [ 00:16:47.297 "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c" 00:16:47.297 ], 00:16:47.297 "product_name": "Raid Volume", 00:16:47.297 "block_size": 512, 00:16:47.297 "num_blocks": 190464, 00:16:47.297 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:47.297 "assigned_rate_limits": { 00:16:47.297 "rw_ios_per_sec": 0, 00:16:47.297 "rw_mbytes_per_sec": 0, 00:16:47.297 "r_mbytes_per_sec": 0, 00:16:47.297 "w_mbytes_per_sec": 0 00:16:47.297 }, 00:16:47.297 "claimed": false, 00:16:47.297 "zoned": false, 00:16:47.297 "supported_io_types": { 00:16:47.297 "read": true, 00:16:47.297 "write": true, 00:16:47.297 "unmap": false, 00:16:47.297 "flush": false, 00:16:47.297 "reset": true, 00:16:47.297 "nvme_admin": false, 00:16:47.297 "nvme_io": false, 00:16:47.297 "nvme_io_md": false, 00:16:47.297 "write_zeroes": true, 00:16:47.297 "zcopy": false, 00:16:47.297 "get_zone_info": false, 00:16:47.297 "zone_management": false, 00:16:47.297 "zone_append": false, 00:16:47.297 "compare": false, 00:16:47.297 "compare_and_write": false, 00:16:47.297 "abort": false, 00:16:47.297 "seek_hole": false, 00:16:47.297 "seek_data": false, 00:16:47.297 "copy": false, 00:16:47.297 "nvme_iov_md": false 00:16:47.297 }, 00:16:47.297 "driver_specific": { 00:16:47.297 "raid": { 00:16:47.297 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:47.297 "strip_size_kb": 64, 00:16:47.297 "state": "online", 00:16:47.297 "raid_level": "raid5f", 00:16:47.297 "superblock": true, 00:16:47.297 "num_base_bdevs": 4, 00:16:47.297 "num_base_bdevs_discovered": 4, 
00:16:47.297 "num_base_bdevs_operational": 4, 00:16:47.297 "base_bdevs_list": [ 00:16:47.297 { 00:16:47.297 "name": "pt1", 00:16:47.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.297 "is_configured": true, 00:16:47.297 "data_offset": 2048, 00:16:47.297 "data_size": 63488 00:16:47.297 }, 00:16:47.297 { 00:16:47.297 "name": "pt2", 00:16:47.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.297 "is_configured": true, 00:16:47.297 "data_offset": 2048, 00:16:47.297 "data_size": 63488 00:16:47.297 }, 00:16:47.297 { 00:16:47.297 "name": "pt3", 00:16:47.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.297 "is_configured": true, 00:16:47.297 "data_offset": 2048, 00:16:47.297 "data_size": 63488 00:16:47.297 }, 00:16:47.297 { 00:16:47.297 "name": "pt4", 00:16:47.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:47.297 "is_configured": true, 00:16:47.297 "data_offset": 2048, 00:16:47.297 "data_size": 63488 00:16:47.297 } 00:16:47.297 ] 00:16:47.297 } 00:16:47.298 } 00:16:47.298 }' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:47.298 pt2 00:16:47.298 pt3 00:16:47.298 pt4' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.298 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.298 16:18:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.558 16:18:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.558 [2024-09-28 16:18:02.062773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.558 
16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c56e4eb9-0e0d-44fa-a48c-6d65c626d69c '!=' c56e4eb9-0e0d-44fa-a48c-6d65c626d69c ']' 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.558 [2024-09-28 16:18:02.106580] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.558 "name": "raid_bdev1", 00:16:47.558 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:47.558 "strip_size_kb": 64, 00:16:47.558 "state": "online", 00:16:47.558 "raid_level": "raid5f", 00:16:47.558 "superblock": true, 00:16:47.558 "num_base_bdevs": 4, 00:16:47.558 "num_base_bdevs_discovered": 3, 00:16:47.558 "num_base_bdevs_operational": 3, 00:16:47.558 "base_bdevs_list": [ 00:16:47.558 { 00:16:47.558 "name": null, 00:16:47.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.558 "is_configured": false, 00:16:47.558 "data_offset": 0, 00:16:47.558 "data_size": 63488 00:16:47.558 }, 00:16:47.558 { 00:16:47.558 "name": "pt2", 00:16:47.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.558 "is_configured": true, 00:16:47.558 "data_offset": 2048, 00:16:47.558 "data_size": 63488 00:16:47.558 }, 00:16:47.558 { 00:16:47.558 "name": "pt3", 00:16:47.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.558 "is_configured": true, 00:16:47.558 "data_offset": 2048, 00:16:47.558 "data_size": 63488 00:16:47.558 }, 00:16:47.558 { 00:16:47.558 "name": "pt4", 00:16:47.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:47.558 "is_configured": true, 00:16:47.558 
"data_offset": 2048, 00:16:47.558 "data_size": 63488 00:16:47.558 } 00:16:47.558 ] 00:16:47.558 }' 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.558 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.128 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.128 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.128 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.128 [2024-09-28 16:18:02.537836] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.129 [2024-09-28 16:18:02.537863] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.129 [2024-09-28 16:18:02.537909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.129 [2024-09-28 16:18:02.537963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.129 [2024-09-28 16:18:02.537970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.129 [2024-09-28 16:18:02.617662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.129 [2024-09-28 16:18:02.617703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.129 [2024-09-28 16:18:02.617717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:48.129 [2024-09-28 16:18:02.617725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.129 [2024-09-28 16:18:02.619591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.129 [2024-09-28 16:18:02.619671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.129 [2024-09-28 16:18:02.619734] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:48.129 [2024-09-28 16:18:02.619771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.129 pt2 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.129 "name": "raid_bdev1", 00:16:48.129 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:48.129 "strip_size_kb": 64, 00:16:48.129 "state": "configuring", 00:16:48.129 "raid_level": "raid5f", 00:16:48.129 "superblock": true, 00:16:48.129 
"num_base_bdevs": 4, 00:16:48.129 "num_base_bdevs_discovered": 1, 00:16:48.129 "num_base_bdevs_operational": 3, 00:16:48.129 "base_bdevs_list": [ 00:16:48.129 { 00:16:48.129 "name": null, 00:16:48.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.129 "is_configured": false, 00:16:48.129 "data_offset": 2048, 00:16:48.129 "data_size": 63488 00:16:48.129 }, 00:16:48.129 { 00:16:48.129 "name": "pt2", 00:16:48.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.129 "is_configured": true, 00:16:48.129 "data_offset": 2048, 00:16:48.129 "data_size": 63488 00:16:48.129 }, 00:16:48.129 { 00:16:48.129 "name": null, 00:16:48.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.129 "is_configured": false, 00:16:48.129 "data_offset": 2048, 00:16:48.129 "data_size": 63488 00:16:48.129 }, 00:16:48.129 { 00:16:48.129 "name": null, 00:16:48.129 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.129 "is_configured": false, 00:16:48.129 "data_offset": 2048, 00:16:48.129 "data_size": 63488 00:16:48.129 } 00:16:48.129 ] 00:16:48.129 }' 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.129 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.389 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:48.389 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:48.389 16:18:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:48.389 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.389 16:18:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.389 [2024-09-28 16:18:03.004997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:48.389 [2024-09-28 
16:18:03.005083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.389 [2024-09-28 16:18:03.005111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:48.389 [2024-09-28 16:18:03.005134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.389 [2024-09-28 16:18:03.005469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.389 [2024-09-28 16:18:03.005522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:48.389 [2024-09-28 16:18:03.005597] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:48.389 [2024-09-28 16:18:03.005648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:48.389 pt3 00:16:48.389 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.389 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:48.389 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.389 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.390 "name": "raid_bdev1", 00:16:48.390 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:48.390 "strip_size_kb": 64, 00:16:48.390 "state": "configuring", 00:16:48.390 "raid_level": "raid5f", 00:16:48.390 "superblock": true, 00:16:48.390 "num_base_bdevs": 4, 00:16:48.390 "num_base_bdevs_discovered": 2, 00:16:48.390 "num_base_bdevs_operational": 3, 00:16:48.390 "base_bdevs_list": [ 00:16:48.390 { 00:16:48.390 "name": null, 00:16:48.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.390 "is_configured": false, 00:16:48.390 "data_offset": 2048, 00:16:48.390 "data_size": 63488 00:16:48.390 }, 00:16:48.390 { 00:16:48.390 "name": "pt2", 00:16:48.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.390 "is_configured": true, 00:16:48.390 "data_offset": 2048, 00:16:48.390 "data_size": 63488 00:16:48.390 }, 00:16:48.390 { 00:16:48.390 "name": "pt3", 00:16:48.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.390 "is_configured": true, 00:16:48.390 "data_offset": 2048, 00:16:48.390 "data_size": 63488 00:16:48.390 }, 00:16:48.390 { 00:16:48.390 "name": null, 00:16:48.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.390 "is_configured": false, 00:16:48.390 "data_offset": 2048, 
00:16:48.390 "data_size": 63488 00:16:48.390 } 00:16:48.390 ] 00:16:48.390 }' 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.390 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.959 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.960 [2024-09-28 16:18:03.480188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:48.960 [2024-09-28 16:18:03.480239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.960 [2024-09-28 16:18:03.480255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:48.960 [2024-09-28 16:18:03.480262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.960 [2024-09-28 16:18:03.480566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.960 [2024-09-28 16:18:03.480592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:48.960 [2024-09-28 16:18:03.480643] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:48.960 [2024-09-28 16:18:03.480657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:48.960 [2024-09-28 16:18:03.480762] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:48.960 [2024-09-28 16:18:03.480769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:48.960 [2024-09-28 16:18:03.480971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:48.960 [2024-09-28 16:18:03.487026] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:48.960 [2024-09-28 16:18:03.487050] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:48.960 [2024-09-28 16:18:03.487294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.960 pt4 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.960 
16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.960 "name": "raid_bdev1", 00:16:48.960 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:48.960 "strip_size_kb": 64, 00:16:48.960 "state": "online", 00:16:48.960 "raid_level": "raid5f", 00:16:48.960 "superblock": true, 00:16:48.960 "num_base_bdevs": 4, 00:16:48.960 "num_base_bdevs_discovered": 3, 00:16:48.960 "num_base_bdevs_operational": 3, 00:16:48.960 "base_bdevs_list": [ 00:16:48.960 { 00:16:48.960 "name": null, 00:16:48.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.960 "is_configured": false, 00:16:48.960 "data_offset": 2048, 00:16:48.960 "data_size": 63488 00:16:48.960 }, 00:16:48.960 { 00:16:48.960 "name": "pt2", 00:16:48.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.960 "is_configured": true, 00:16:48.960 "data_offset": 2048, 00:16:48.960 "data_size": 63488 00:16:48.960 }, 00:16:48.960 { 00:16:48.960 "name": "pt3", 00:16:48.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.960 "is_configured": true, 00:16:48.960 "data_offset": 2048, 00:16:48.960 "data_size": 63488 00:16:48.960 }, 00:16:48.960 { 00:16:48.960 "name": "pt4", 00:16:48.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.960 "is_configured": true, 00:16:48.960 "data_offset": 2048, 00:16:48.960 "data_size": 63488 00:16:48.960 } 00:16:48.960 ] 00:16:48.960 }' 00:16:48.960 16:18:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.960 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.220 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.220 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.220 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.479 [2024-09-28 16:18:03.905947] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.480 [2024-09-28 16:18:03.905971] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.480 [2024-09-28 16:18:03.906018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.480 [2024-09-28 16:18:03.906074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.480 [2024-09-28 16:18:03.906085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 [2024-09-28 16:18:03.977830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.480 [2024-09-28 16:18:03.977887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.480 [2024-09-28 16:18:03.977901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:49.480 [2024-09-28 16:18:03.977910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.480 [2024-09-28 16:18:03.979920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.480 [2024-09-28 16:18:03.979958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.480 [2024-09-28 16:18:03.980012] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:49.480 [2024-09-28 16:18:03.980052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.480 
[2024-09-28 16:18:03.980159] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:49.480 [2024-09-28 16:18:03.980172] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.480 [2024-09-28 16:18:03.980184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:49.480 [2024-09-28 16:18:03.980254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.480 [2024-09-28 16:18:03.980355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:49.480 pt1 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 16:18:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.480 "name": "raid_bdev1", 00:16:49.480 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:49.480 "strip_size_kb": 64, 00:16:49.480 "state": "configuring", 00:16:49.480 "raid_level": "raid5f", 00:16:49.480 "superblock": true, 00:16:49.480 "num_base_bdevs": 4, 00:16:49.480 "num_base_bdevs_discovered": 2, 00:16:49.480 "num_base_bdevs_operational": 3, 00:16:49.480 "base_bdevs_list": [ 00:16:49.480 { 00:16:49.480 "name": null, 00:16:49.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.480 "is_configured": false, 00:16:49.480 "data_offset": 2048, 00:16:49.480 "data_size": 63488 00:16:49.480 }, 00:16:49.480 { 00:16:49.480 "name": "pt2", 00:16:49.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.480 "is_configured": true, 00:16:49.480 "data_offset": 2048, 00:16:49.480 "data_size": 63488 00:16:49.480 }, 00:16:49.480 { 00:16:49.480 "name": "pt3", 00:16:49.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.480 "is_configured": true, 00:16:49.480 "data_offset": 2048, 00:16:49.480 "data_size": 63488 00:16:49.480 }, 00:16:49.480 { 00:16:49.480 "name": null, 00:16:49.480 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:49.480 "is_configured": false, 00:16:49.480 "data_offset": 2048, 00:16:49.480 "data_size": 63488 00:16:49.480 } 00:16:49.480 ] 
00:16:49.480 }' 00:16:49.480 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.480 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.049 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:50.049 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:50.049 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.049 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.049 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.049 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.050 [2024-09-28 16:18:04.481028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:50.050 [2024-09-28 16:18:04.481124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.050 [2024-09-28 16:18:04.481158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:50.050 [2024-09-28 16:18:04.481185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.050 [2024-09-28 16:18:04.481521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.050 [2024-09-28 16:18:04.481581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:50.050 [2024-09-28 16:18:04.481661] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:50.050 [2024-09-28 16:18:04.481704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:50.050 [2024-09-28 16:18:04.481839] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:50.050 [2024-09-28 16:18:04.481873] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.050 [2024-09-28 16:18:04.482095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:50.050 [2024-09-28 16:18:04.488999] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:50.050 [2024-09-28 16:18:04.489057] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:50.050 [2024-09-28 16:18:04.489306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.050 pt4 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.050 16:18:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.050 "name": "raid_bdev1", 00:16:50.050 "uuid": "c56e4eb9-0e0d-44fa-a48c-6d65c626d69c", 00:16:50.050 "strip_size_kb": 64, 00:16:50.050 "state": "online", 00:16:50.050 "raid_level": "raid5f", 00:16:50.050 "superblock": true, 00:16:50.050 "num_base_bdevs": 4, 00:16:50.050 "num_base_bdevs_discovered": 3, 00:16:50.050 "num_base_bdevs_operational": 3, 00:16:50.050 "base_bdevs_list": [ 00:16:50.050 { 00:16:50.050 "name": null, 00:16:50.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.050 "is_configured": false, 00:16:50.050 "data_offset": 2048, 00:16:50.050 "data_size": 63488 00:16:50.050 }, 00:16:50.050 { 00:16:50.050 "name": "pt2", 00:16:50.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.050 "is_configured": true, 00:16:50.050 "data_offset": 2048, 00:16:50.050 "data_size": 63488 00:16:50.050 }, 00:16:50.050 { 00:16:50.050 "name": "pt3", 00:16:50.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.050 "is_configured": true, 00:16:50.050 "data_offset": 2048, 00:16:50.050 "data_size": 63488 
00:16:50.050 }, 00:16:50.050 { 00:16:50.050 "name": "pt4", 00:16:50.050 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.050 "is_configured": true, 00:16:50.050 "data_offset": 2048, 00:16:50.050 "data_size": 63488 00:16:50.050 } 00:16:50.050 ] 00:16:50.050 }' 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.050 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.620 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:50.620 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.620 16:18:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:50.620 16:18:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.620 [2024-09-28 16:18:05.048064] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c56e4eb9-0e0d-44fa-a48c-6d65c626d69c '!=' c56e4eb9-0e0d-44fa-a48c-6d65c626d69c ']' 00:16:50.620 16:18:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84119 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84119 ']' 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84119 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84119 00:16:50.620 killing process with pid 84119 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84119' 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84119 00:16:50.620 [2024-09-28 16:18:05.137220] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.620 [2024-09-28 16:18:05.137290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.620 [2024-09-28 16:18:05.137342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.620 [2024-09-28 16:18:05.137352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:50.620 16:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84119 00:16:50.880 [2024-09-28 16:18:05.508934] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.261 16:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:52.261 
00:16:52.261 real 0m8.645s 00:16:52.261 user 0m13.534s 00:16:52.261 sys 0m1.644s 00:16:52.261 ************************************ 00:16:52.261 END TEST raid5f_superblock_test 00:16:52.261 ************************************ 00:16:52.261 16:18:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.261 16:18:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.261 16:18:06 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:52.261 16:18:06 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:52.261 16:18:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:52.261 16:18:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.261 16:18:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.261 ************************************ 00:16:52.261 START TEST raid5f_rebuild_test 00:16:52.261 ************************************ 00:16:52.261 16:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:16:52.261 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:52.261 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:52.261 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:52.261 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:52.261 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:52.261 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:52.262 16:18:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84600 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84600 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84600 ']' 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.262 16:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.262 [2024-09-28 16:18:06.865949] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:52.262 [2024-09-28 16:18:06.866150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84600 ] 00:16:52.262 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:52.262 Zero copy mechanism will not be used. 00:16:52.522 [2024-09-28 16:18:07.034370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.782 [2024-09-28 16:18:07.227510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.782 [2024-09-28 16:18:07.388757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.782 [2024-09-28 16:18:07.388869] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.042 BaseBdev1_malloc 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10
-- # set +x 00:16:53.042 [2024-09-28 16:18:07.703787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:53.042 [2024-09-28 16:18:07.703947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.042 [2024-09-28 16:18:07.703972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:53.042 [2024-09-28 16:18:07.703985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.042 [2024-09-28 16:18:07.705866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.042 [2024-09-28 16:18:07.705903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:53.042 BaseBdev1 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.042 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.303 BaseBdev2_malloc 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.303 [2024-09-28 16:18:07.787097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:53.303 [2024-09-28 16:18:07.787153] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.303 [2024-09-28 16:18:07.787169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:53.303 [2024-09-28 16:18:07.787180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.303 [2024-09-28 16:18:07.789065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.303 [2024-09-28 16:18:07.789102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:53.303 BaseBdev2 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.303 BaseBdev3_malloc 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.303 [2024-09-28 16:18:07.836668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:53.303 [2024-09-28 16:18:07.836718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.303 [2024-09-28 16:18:07.836736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:53.303 
[2024-09-28 16:18:07.836745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.303 [2024-09-28 16:18:07.838577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.303 [2024-09-28 16:18:07.838693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:53.303 BaseBdev3 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.303 BaseBdev4_malloc 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.303 [2024-09-28 16:18:07.889104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:53.303 [2024-09-28 16:18:07.889156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.303 [2024-09-28 16:18:07.889174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:53.303 [2024-09-28 16:18:07.889183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.303 [2024-09-28 16:18:07.891051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:53.303 [2024-09-28 16:18:07.891090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:53.303 BaseBdev4 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.303 spare_malloc 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.303 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.304 spare_delay 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.304 [2024-09-28 16:18:07.956820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.304 [2024-09-28 16:18:07.956951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.304 [2024-09-28 16:18:07.956973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:53.304 [2024-09-28 16:18:07.956983] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.304 [2024-09-28 16:18:07.958829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.304 [2024-09-28 16:18:07.958866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.304 spare 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.304 [2024-09-28 16:18:07.968859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.304 [2024-09-28 16:18:07.970449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.304 [2024-09-28 16:18:07.970507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.304 [2024-09-28 16:18:07.970551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:53.304 [2024-09-28 16:18:07.970630] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:53.304 [2024-09-28 16:18:07.970640] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:53.304 [2024-09-28 16:18:07.970858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:53.304 [2024-09-28 16:18:07.977966] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:53.304 [2024-09-28 16:18:07.977986] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:53.304 [2024-09-28 
16:18:07.978147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.304 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.566 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.566 16:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.566 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.566 16:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.566 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.566 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.566 "name": "raid_bdev1", 00:16:53.566 "uuid": 
"942db35a-0677-4a86-88d9-981425362d30", 00:16:53.566 "strip_size_kb": 64, 00:16:53.566 "state": "online", 00:16:53.566 "raid_level": "raid5f", 00:16:53.566 "superblock": false, 00:16:53.566 "num_base_bdevs": 4, 00:16:53.566 "num_base_bdevs_discovered": 4, 00:16:53.566 "num_base_bdevs_operational": 4, 00:16:53.566 "base_bdevs_list": [ 00:16:53.566 { 00:16:53.566 "name": "BaseBdev1", 00:16:53.566 "uuid": "f708a021-7bc5-5610-af83-7cf886f2d660", 00:16:53.566 "is_configured": true, 00:16:53.566 "data_offset": 0, 00:16:53.566 "data_size": 65536 00:16:53.566 }, 00:16:53.566 { 00:16:53.566 "name": "BaseBdev2", 00:16:53.566 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:53.566 "is_configured": true, 00:16:53.566 "data_offset": 0, 00:16:53.566 "data_size": 65536 00:16:53.566 }, 00:16:53.566 { 00:16:53.566 "name": "BaseBdev3", 00:16:53.566 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:53.566 "is_configured": true, 00:16:53.566 "data_offset": 0, 00:16:53.566 "data_size": 65536 00:16:53.566 }, 00:16:53.566 { 00:16:53.566 "name": "BaseBdev4", 00:16:53.566 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:53.566 "is_configured": true, 00:16:53.566 "data_offset": 0, 00:16:53.566 "data_size": 65536 00:16:53.566 } 00:16:53.566 ] 00:16:53.566 }' 00:16:53.566 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.566 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 [2024-09-28 16:18:08.405449] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:53.826 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:54.086 [2024-09-28 16:18:08.668908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:54.086 /dev/nbd0 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:54.086 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.087 1+0 records in 00:16:54.087 1+0 records out 00:16:54.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284531 s, 14.4 MB/s 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.087 16:18:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:54.087 16:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:54.657 512+0 records in 00:16:54.657 512+0 records out 00:16:54.657 100663296 bytes (101 MB, 96 MiB) copied, 0.478842 s, 210 MB/s 00:16:54.657 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:54.657 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:54.657 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:54.657 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:54.657 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:54.657 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.657 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:54.917 [2024-09-28 16:18:09.406663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.917 [2024-09-28 16:18:09.442605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.917 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.917 "name": "raid_bdev1", 00:16:54.917 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:16:54.917 "strip_size_kb": 64, 00:16:54.917 "state": "online", 00:16:54.917 "raid_level": "raid5f", 00:16:54.917 "superblock": false, 00:16:54.917 "num_base_bdevs": 4, 00:16:54.917 "num_base_bdevs_discovered": 3, 00:16:54.917 "num_base_bdevs_operational": 3, 00:16:54.917 "base_bdevs_list": [ 00:16:54.917 { 00:16:54.918 "name": null, 00:16:54.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.918 "is_configured": false, 00:16:54.918 "data_offset": 0, 00:16:54.918 "data_size": 65536 00:16:54.918 }, 00:16:54.918 { 00:16:54.918 "name": "BaseBdev2", 00:16:54.918 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:54.918 "is_configured": true, 00:16:54.918 
"data_offset": 0, 00:16:54.918 "data_size": 65536 00:16:54.918 }, 00:16:54.918 { 00:16:54.918 "name": "BaseBdev3", 00:16:54.918 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:54.918 "is_configured": true, 00:16:54.918 "data_offset": 0, 00:16:54.918 "data_size": 65536 00:16:54.918 }, 00:16:54.918 { 00:16:54.918 "name": "BaseBdev4", 00:16:54.918 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:54.918 "is_configured": true, 00:16:54.918 "data_offset": 0, 00:16:54.918 "data_size": 65536 00:16:54.918 } 00:16:54.918 ] 00:16:54.918 }' 00:16:54.918 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.918 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.488 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:55.488 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.488 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.488 [2024-09-28 16:18:09.873823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.488 [2024-09-28 16:18:09.885580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:55.488 16:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.488 16:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:55.488 [2024-09-28 16:18:09.893832] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:56.426 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.426 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.426 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:56.426 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.426 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.426 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.426 16:18:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.427 16:18:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.427 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.427 16:18:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.427 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.427 "name": "raid_bdev1", 00:16:56.427 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:16:56.427 "strip_size_kb": 64, 00:16:56.427 "state": "online", 00:16:56.427 "raid_level": "raid5f", 00:16:56.427 "superblock": false, 00:16:56.427 "num_base_bdevs": 4, 00:16:56.427 "num_base_bdevs_discovered": 4, 00:16:56.427 "num_base_bdevs_operational": 4, 00:16:56.427 "process": { 00:16:56.427 "type": "rebuild", 00:16:56.427 "target": "spare", 00:16:56.427 "progress": { 00:16:56.427 "blocks": 19200, 00:16:56.427 "percent": 9 00:16:56.427 } 00:16:56.427 }, 00:16:56.427 "base_bdevs_list": [ 00:16:56.427 { 00:16:56.427 "name": "spare", 00:16:56.427 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:16:56.427 "is_configured": true, 00:16:56.427 "data_offset": 0, 00:16:56.427 "data_size": 65536 00:16:56.427 }, 00:16:56.427 { 00:16:56.427 "name": "BaseBdev2", 00:16:56.427 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:56.427 "is_configured": true, 00:16:56.427 "data_offset": 0, 00:16:56.427 "data_size": 65536 00:16:56.427 }, 00:16:56.427 { 00:16:56.427 "name": "BaseBdev3", 00:16:56.427 "uuid": 
"60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:56.427 "is_configured": true, 00:16:56.427 "data_offset": 0, 00:16:56.427 "data_size": 65536 00:16:56.427 }, 00:16:56.427 { 00:16:56.427 "name": "BaseBdev4", 00:16:56.427 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:56.427 "is_configured": true, 00:16:56.427 "data_offset": 0, 00:16:56.427 "data_size": 65536 00:16:56.427 } 00:16:56.427 ] 00:16:56.427 }' 00:16:56.427 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.427 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.427 16:18:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.427 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.427 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:56.427 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.427 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.427 [2024-09-28 16:18:11.028320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:56.427 [2024-09-28 16:18:11.098994] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:56.427 [2024-09-28 16:18:11.099099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.427 [2024-09-28 16:18:11.099135] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:56.427 [2024-09-28 16:18:11.099157] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.687 "name": "raid_bdev1", 00:16:56.687 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:16:56.687 "strip_size_kb": 64, 00:16:56.687 "state": "online", 00:16:56.687 "raid_level": "raid5f", 00:16:56.687 "superblock": false, 00:16:56.687 "num_base_bdevs": 4, 00:16:56.687 "num_base_bdevs_discovered": 3, 00:16:56.687 
"num_base_bdevs_operational": 3, 00:16:56.687 "base_bdevs_list": [ 00:16:56.687 { 00:16:56.687 "name": null, 00:16:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.687 "is_configured": false, 00:16:56.687 "data_offset": 0, 00:16:56.687 "data_size": 65536 00:16:56.687 }, 00:16:56.687 { 00:16:56.687 "name": "BaseBdev2", 00:16:56.687 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:56.687 "is_configured": true, 00:16:56.687 "data_offset": 0, 00:16:56.687 "data_size": 65536 00:16:56.687 }, 00:16:56.687 { 00:16:56.687 "name": "BaseBdev3", 00:16:56.687 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:56.687 "is_configured": true, 00:16:56.687 "data_offset": 0, 00:16:56.687 "data_size": 65536 00:16:56.687 }, 00:16:56.687 { 00:16:56.687 "name": "BaseBdev4", 00:16:56.687 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:56.687 "is_configured": true, 00:16:56.687 "data_offset": 0, 00:16:56.687 "data_size": 65536 00:16:56.687 } 00:16:56.687 ] 00:16:56.687 }' 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.687 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.947 16:18:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.947 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.207 "name": "raid_bdev1", 00:16:57.207 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:16:57.207 "strip_size_kb": 64, 00:16:57.207 "state": "online", 00:16:57.207 "raid_level": "raid5f", 00:16:57.207 "superblock": false, 00:16:57.207 "num_base_bdevs": 4, 00:16:57.207 "num_base_bdevs_discovered": 3, 00:16:57.207 "num_base_bdevs_operational": 3, 00:16:57.207 "base_bdevs_list": [ 00:16:57.207 { 00:16:57.207 "name": null, 00:16:57.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.207 "is_configured": false, 00:16:57.207 "data_offset": 0, 00:16:57.207 "data_size": 65536 00:16:57.207 }, 00:16:57.207 { 00:16:57.207 "name": "BaseBdev2", 00:16:57.207 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:57.207 "is_configured": true, 00:16:57.207 "data_offset": 0, 00:16:57.207 "data_size": 65536 00:16:57.207 }, 00:16:57.207 { 00:16:57.207 "name": "BaseBdev3", 00:16:57.207 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:57.207 "is_configured": true, 00:16:57.207 "data_offset": 0, 00:16:57.207 "data_size": 65536 00:16:57.207 }, 00:16:57.207 { 00:16:57.207 "name": "BaseBdev4", 00:16:57.207 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:57.207 "is_configured": true, 00:16:57.207 "data_offset": 0, 00:16:57.207 "data_size": 65536 00:16:57.207 } 00:16:57.207 ] 00:16:57.207 }' 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.207 [2024-09-28 16:18:11.754206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.207 [2024-09-28 16:18:11.766989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.207 16:18:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:57.207 [2024-09-28 16:18:11.775750] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.147 16:18:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.147 "name": "raid_bdev1", 00:16:58.147 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:16:58.147 "strip_size_kb": 64, 00:16:58.147 "state": "online", 00:16:58.147 "raid_level": "raid5f", 00:16:58.147 "superblock": false, 00:16:58.147 "num_base_bdevs": 4, 00:16:58.147 "num_base_bdevs_discovered": 4, 00:16:58.147 "num_base_bdevs_operational": 4, 00:16:58.147 "process": { 00:16:58.147 "type": "rebuild", 00:16:58.147 "target": "spare", 00:16:58.147 "progress": { 00:16:58.147 "blocks": 19200, 00:16:58.147 "percent": 9 00:16:58.147 } 00:16:58.147 }, 00:16:58.147 "base_bdevs_list": [ 00:16:58.147 { 00:16:58.147 "name": "spare", 00:16:58.147 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:16:58.147 "is_configured": true, 00:16:58.147 "data_offset": 0, 00:16:58.147 "data_size": 65536 00:16:58.147 }, 00:16:58.147 { 00:16:58.147 "name": "BaseBdev2", 00:16:58.147 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:58.147 "is_configured": true, 00:16:58.147 "data_offset": 0, 00:16:58.147 "data_size": 65536 00:16:58.147 }, 00:16:58.147 { 00:16:58.147 "name": "BaseBdev3", 00:16:58.147 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:58.147 "is_configured": true, 00:16:58.147 "data_offset": 0, 00:16:58.147 "data_size": 65536 00:16:58.147 }, 00:16:58.147 { 00:16:58.147 "name": "BaseBdev4", 00:16:58.147 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:58.147 "is_configured": true, 00:16:58.147 "data_offset": 0, 00:16:58.147 "data_size": 65536 00:16:58.147 } 00:16:58.147 ] 00:16:58.147 }' 00:16:58.147 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=625 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.407 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.407 
"name": "raid_bdev1", 00:16:58.407 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:16:58.407 "strip_size_kb": 64, 00:16:58.407 "state": "online", 00:16:58.407 "raid_level": "raid5f", 00:16:58.407 "superblock": false, 00:16:58.407 "num_base_bdevs": 4, 00:16:58.407 "num_base_bdevs_discovered": 4, 00:16:58.407 "num_base_bdevs_operational": 4, 00:16:58.407 "process": { 00:16:58.407 "type": "rebuild", 00:16:58.407 "target": "spare", 00:16:58.407 "progress": { 00:16:58.407 "blocks": 21120, 00:16:58.407 "percent": 10 00:16:58.407 } 00:16:58.407 }, 00:16:58.407 "base_bdevs_list": [ 00:16:58.407 { 00:16:58.407 "name": "spare", 00:16:58.407 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:16:58.407 "is_configured": true, 00:16:58.407 "data_offset": 0, 00:16:58.407 "data_size": 65536 00:16:58.407 }, 00:16:58.407 { 00:16:58.407 "name": "BaseBdev2", 00:16:58.407 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:58.407 "is_configured": true, 00:16:58.407 "data_offset": 0, 00:16:58.408 "data_size": 65536 00:16:58.408 }, 00:16:58.408 { 00:16:58.408 "name": "BaseBdev3", 00:16:58.408 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:58.408 "is_configured": true, 00:16:58.408 "data_offset": 0, 00:16:58.408 "data_size": 65536 00:16:58.408 }, 00:16:58.408 { 00:16:58.408 "name": "BaseBdev4", 00:16:58.408 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:58.408 "is_configured": true, 00:16:58.408 "data_offset": 0, 00:16:58.408 "data_size": 65536 00:16:58.408 } 00:16:58.408 ] 00:16:58.408 }' 00:16:58.408 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.408 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.408 16:18:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.408 16:18:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.408 16:18:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.788 "name": "raid_bdev1", 00:16:59.788 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:16:59.788 "strip_size_kb": 64, 00:16:59.788 "state": "online", 00:16:59.788 "raid_level": "raid5f", 00:16:59.788 "superblock": false, 00:16:59.788 "num_base_bdevs": 4, 00:16:59.788 "num_base_bdevs_discovered": 4, 00:16:59.788 "num_base_bdevs_operational": 4, 00:16:59.788 "process": { 00:16:59.788 "type": "rebuild", 00:16:59.788 "target": "spare", 00:16:59.788 "progress": { 00:16:59.788 "blocks": 42240, 00:16:59.788 "percent": 21 00:16:59.788 } 00:16:59.788 }, 00:16:59.788 "base_bdevs_list": [ 00:16:59.788 { 
00:16:59.788 "name": "spare", 00:16:59.788 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:16:59.788 "is_configured": true, 00:16:59.788 "data_offset": 0, 00:16:59.788 "data_size": 65536 00:16:59.788 }, 00:16:59.788 { 00:16:59.788 "name": "BaseBdev2", 00:16:59.788 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:16:59.788 "is_configured": true, 00:16:59.788 "data_offset": 0, 00:16:59.788 "data_size": 65536 00:16:59.788 }, 00:16:59.788 { 00:16:59.788 "name": "BaseBdev3", 00:16:59.788 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:16:59.788 "is_configured": true, 00:16:59.788 "data_offset": 0, 00:16:59.788 "data_size": 65536 00:16:59.788 }, 00:16:59.788 { 00:16:59.788 "name": "BaseBdev4", 00:16:59.788 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:16:59.788 "is_configured": true, 00:16:59.788 "data_offset": 0, 00:16:59.788 "data_size": 65536 00:16:59.788 } 00:16:59.788 ] 00:16:59.788 }' 00:16:59.788 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.789 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.789 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.789 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.789 16:18:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.726 16:18:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.727 16:18:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.727 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.727 "name": "raid_bdev1", 00:17:00.727 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:00.727 "strip_size_kb": 64, 00:17:00.727 "state": "online", 00:17:00.727 "raid_level": "raid5f", 00:17:00.727 "superblock": false, 00:17:00.727 "num_base_bdevs": 4, 00:17:00.727 "num_base_bdevs_discovered": 4, 00:17:00.727 "num_base_bdevs_operational": 4, 00:17:00.727 "process": { 00:17:00.727 "type": "rebuild", 00:17:00.727 "target": "spare", 00:17:00.727 "progress": { 00:17:00.727 "blocks": 65280, 00:17:00.727 "percent": 33 00:17:00.727 } 00:17:00.727 }, 00:17:00.727 "base_bdevs_list": [ 00:17:00.727 { 00:17:00.727 "name": "spare", 00:17:00.727 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:00.727 "is_configured": true, 00:17:00.727 "data_offset": 0, 00:17:00.727 "data_size": 65536 00:17:00.727 }, 00:17:00.727 { 00:17:00.727 "name": "BaseBdev2", 00:17:00.727 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:00.727 "is_configured": true, 00:17:00.727 "data_offset": 0, 00:17:00.727 "data_size": 65536 00:17:00.727 }, 00:17:00.727 { 00:17:00.727 "name": "BaseBdev3", 00:17:00.727 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:00.727 "is_configured": true, 00:17:00.727 "data_offset": 0, 00:17:00.727 
"data_size": 65536 00:17:00.727 }, 00:17:00.727 { 00:17:00.727 "name": "BaseBdev4", 00:17:00.727 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:00.727 "is_configured": true, 00:17:00.727 "data_offset": 0, 00:17:00.727 "data_size": 65536 00:17:00.727 } 00:17:00.727 ] 00:17:00.727 }' 00:17:00.727 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.727 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.727 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.727 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.727 16:18:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.105 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.105 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.105 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.105 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.105 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.105 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.106 "name": "raid_bdev1", 00:17:02.106 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:02.106 "strip_size_kb": 64, 00:17:02.106 "state": "online", 00:17:02.106 "raid_level": "raid5f", 00:17:02.106 "superblock": false, 00:17:02.106 "num_base_bdevs": 4, 00:17:02.106 "num_base_bdevs_discovered": 4, 00:17:02.106 "num_base_bdevs_operational": 4, 00:17:02.106 "process": { 00:17:02.106 "type": "rebuild", 00:17:02.106 "target": "spare", 00:17:02.106 "progress": { 00:17:02.106 "blocks": 86400, 00:17:02.106 "percent": 43 00:17:02.106 } 00:17:02.106 }, 00:17:02.106 "base_bdevs_list": [ 00:17:02.106 { 00:17:02.106 "name": "spare", 00:17:02.106 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:02.106 "is_configured": true, 00:17:02.106 "data_offset": 0, 00:17:02.106 "data_size": 65536 00:17:02.106 }, 00:17:02.106 { 00:17:02.106 "name": "BaseBdev2", 00:17:02.106 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:02.106 "is_configured": true, 00:17:02.106 "data_offset": 0, 00:17:02.106 "data_size": 65536 00:17:02.106 }, 00:17:02.106 { 00:17:02.106 "name": "BaseBdev3", 00:17:02.106 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:02.106 "is_configured": true, 00:17:02.106 "data_offset": 0, 00:17:02.106 "data_size": 65536 00:17:02.106 }, 00:17:02.106 { 00:17:02.106 "name": "BaseBdev4", 00:17:02.106 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:02.106 "is_configured": true, 00:17:02.106 "data_offset": 0, 00:17:02.106 "data_size": 65536 00:17:02.106 } 00:17:02.106 ] 00:17:02.106 }' 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.106 16:18:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.043 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.043 "name": "raid_bdev1", 00:17:03.043 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:03.043 "strip_size_kb": 64, 00:17:03.043 "state": "online", 00:17:03.043 "raid_level": "raid5f", 00:17:03.043 "superblock": false, 00:17:03.043 "num_base_bdevs": 4, 00:17:03.043 "num_base_bdevs_discovered": 4, 00:17:03.043 "num_base_bdevs_operational": 4, 00:17:03.043 "process": { 00:17:03.043 "type": "rebuild", 00:17:03.043 "target": "spare", 00:17:03.043 
"progress": { 00:17:03.043 "blocks": 107520, 00:17:03.043 "percent": 54 00:17:03.043 } 00:17:03.043 }, 00:17:03.043 "base_bdevs_list": [ 00:17:03.043 { 00:17:03.043 "name": "spare", 00:17:03.043 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:03.043 "is_configured": true, 00:17:03.043 "data_offset": 0, 00:17:03.043 "data_size": 65536 00:17:03.043 }, 00:17:03.043 { 00:17:03.043 "name": "BaseBdev2", 00:17:03.043 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:03.043 "is_configured": true, 00:17:03.043 "data_offset": 0, 00:17:03.043 "data_size": 65536 00:17:03.043 }, 00:17:03.043 { 00:17:03.043 "name": "BaseBdev3", 00:17:03.044 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:03.044 "is_configured": true, 00:17:03.044 "data_offset": 0, 00:17:03.044 "data_size": 65536 00:17:03.044 }, 00:17:03.044 { 00:17:03.044 "name": "BaseBdev4", 00:17:03.044 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:03.044 "is_configured": true, 00:17:03.044 "data_offset": 0, 00:17:03.044 "data_size": 65536 00:17:03.044 } 00:17:03.044 ] 00:17:03.044 }' 00:17:03.044 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.044 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.044 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.044 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.044 16:18:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.981 16:18:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.981 "name": "raid_bdev1", 00:17:03.981 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:03.981 "strip_size_kb": 64, 00:17:03.981 "state": "online", 00:17:03.981 "raid_level": "raid5f", 00:17:03.981 "superblock": false, 00:17:03.981 "num_base_bdevs": 4, 00:17:03.981 "num_base_bdevs_discovered": 4, 00:17:03.981 "num_base_bdevs_operational": 4, 00:17:03.981 "process": { 00:17:03.981 "type": "rebuild", 00:17:03.981 "target": "spare", 00:17:03.981 "progress": { 00:17:03.981 "blocks": 130560, 00:17:03.981 "percent": 66 00:17:03.981 } 00:17:03.981 }, 00:17:03.981 "base_bdevs_list": [ 00:17:03.981 { 00:17:03.981 "name": "spare", 00:17:03.981 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:03.981 "is_configured": true, 00:17:03.981 "data_offset": 0, 00:17:03.981 "data_size": 65536 00:17:03.981 }, 00:17:03.981 { 00:17:03.981 "name": "BaseBdev2", 00:17:03.981 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:03.981 "is_configured": true, 00:17:03.981 "data_offset": 0, 00:17:03.981 "data_size": 65536 00:17:03.981 }, 00:17:03.981 { 
00:17:03.981 "name": "BaseBdev3", 00:17:03.981 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:03.981 "is_configured": true, 00:17:03.981 "data_offset": 0, 00:17:03.981 "data_size": 65536 00:17:03.981 }, 00:17:03.981 { 00:17:03.981 "name": "BaseBdev4", 00:17:03.981 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:03.981 "is_configured": true, 00:17:03.981 "data_offset": 0, 00:17:03.981 "data_size": 65536 00:17:03.981 } 00:17:03.981 ] 00:17:03.981 }' 00:17:03.981 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.241 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.241 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.241 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.241 16:18:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.178 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.178 "name": "raid_bdev1", 00:17:05.178 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:05.178 "strip_size_kb": 64, 00:17:05.178 "state": "online", 00:17:05.178 "raid_level": "raid5f", 00:17:05.178 "superblock": false, 00:17:05.178 "num_base_bdevs": 4, 00:17:05.178 "num_base_bdevs_discovered": 4, 00:17:05.178 "num_base_bdevs_operational": 4, 00:17:05.178 "process": { 00:17:05.178 "type": "rebuild", 00:17:05.178 "target": "spare", 00:17:05.178 "progress": { 00:17:05.178 "blocks": 151680, 00:17:05.179 "percent": 77 00:17:05.179 } 00:17:05.179 }, 00:17:05.179 "base_bdevs_list": [ 00:17:05.179 { 00:17:05.179 "name": "spare", 00:17:05.179 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:05.179 "is_configured": true, 00:17:05.179 "data_offset": 0, 00:17:05.179 "data_size": 65536 00:17:05.179 }, 00:17:05.179 { 00:17:05.179 "name": "BaseBdev2", 00:17:05.179 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:05.179 "is_configured": true, 00:17:05.179 "data_offset": 0, 00:17:05.179 "data_size": 65536 00:17:05.179 }, 00:17:05.179 { 00:17:05.179 "name": "BaseBdev3", 00:17:05.179 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:05.179 "is_configured": true, 00:17:05.179 "data_offset": 0, 00:17:05.179 "data_size": 65536 00:17:05.179 }, 00:17:05.179 { 00:17:05.179 "name": "BaseBdev4", 00:17:05.179 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:05.179 "is_configured": true, 00:17:05.179 "data_offset": 0, 00:17:05.179 "data_size": 65536 00:17:05.179 } 00:17:05.179 ] 00:17:05.179 }' 00:17:05.179 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.179 16:18:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.179 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.179 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.179 16:18:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.558 "name": "raid_bdev1", 00:17:06.558 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:06.558 "strip_size_kb": 64, 00:17:06.558 "state": "online", 00:17:06.558 "raid_level": "raid5f", 00:17:06.558 "superblock": false, 00:17:06.558 "num_base_bdevs": 4, 00:17:06.558 
"num_base_bdevs_discovered": 4, 00:17:06.558 "num_base_bdevs_operational": 4, 00:17:06.558 "process": { 00:17:06.558 "type": "rebuild", 00:17:06.558 "target": "spare", 00:17:06.558 "progress": { 00:17:06.558 "blocks": 172800, 00:17:06.558 "percent": 87 00:17:06.558 } 00:17:06.558 }, 00:17:06.558 "base_bdevs_list": [ 00:17:06.558 { 00:17:06.558 "name": "spare", 00:17:06.558 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:06.558 "is_configured": true, 00:17:06.558 "data_offset": 0, 00:17:06.558 "data_size": 65536 00:17:06.558 }, 00:17:06.558 { 00:17:06.558 "name": "BaseBdev2", 00:17:06.558 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:06.558 "is_configured": true, 00:17:06.558 "data_offset": 0, 00:17:06.558 "data_size": 65536 00:17:06.558 }, 00:17:06.558 { 00:17:06.558 "name": "BaseBdev3", 00:17:06.558 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:06.558 "is_configured": true, 00:17:06.558 "data_offset": 0, 00:17:06.558 "data_size": 65536 00:17:06.558 }, 00:17:06.558 { 00:17:06.558 "name": "BaseBdev4", 00:17:06.558 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:06.558 "is_configured": true, 00:17:06.558 "data_offset": 0, 00:17:06.558 "data_size": 65536 00:17:06.558 } 00:17:06.558 ] 00:17:06.558 }' 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.558 16:18:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.558 16:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.558 16:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.497 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.497 "name": "raid_bdev1", 00:17:07.497 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:07.497 "strip_size_kb": 64, 00:17:07.497 "state": "online", 00:17:07.497 "raid_level": "raid5f", 00:17:07.497 "superblock": false, 00:17:07.497 "num_base_bdevs": 4, 00:17:07.497 "num_base_bdevs_discovered": 4, 00:17:07.497 "num_base_bdevs_operational": 4, 00:17:07.497 "process": { 00:17:07.497 "type": "rebuild", 00:17:07.497 "target": "spare", 00:17:07.497 "progress": { 00:17:07.497 "blocks": 195840, 00:17:07.497 "percent": 99 00:17:07.497 } 00:17:07.497 }, 00:17:07.497 "base_bdevs_list": [ 00:17:07.497 { 00:17:07.497 "name": "spare", 00:17:07.497 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:07.497 "is_configured": true, 00:17:07.497 "data_offset": 0, 00:17:07.497 "data_size": 65536 00:17:07.497 }, 00:17:07.497 { 00:17:07.497 "name": "BaseBdev2", 00:17:07.497 "uuid": 
"7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:07.497 "is_configured": true, 00:17:07.497 "data_offset": 0, 00:17:07.497 "data_size": 65536 00:17:07.497 }, 00:17:07.497 { 00:17:07.497 "name": "BaseBdev3", 00:17:07.497 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:07.497 "is_configured": true, 00:17:07.497 "data_offset": 0, 00:17:07.497 "data_size": 65536 00:17:07.497 }, 00:17:07.497 { 00:17:07.497 "name": "BaseBdev4", 00:17:07.497 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:07.497 "is_configured": true, 00:17:07.497 "data_offset": 0, 00:17:07.497 "data_size": 65536 00:17:07.497 } 00:17:07.498 ] 00:17:07.498 }' 00:17:07.498 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.498 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.498 [2024-09-28 16:18:22.114602] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:07.498 [2024-09-28 16:18:22.114725] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:07.498 [2024-09-28 16:18:22.114784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.498 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.498 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.498 16:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.878 "name": "raid_bdev1", 00:17:08.878 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:08.878 "strip_size_kb": 64, 00:17:08.878 "state": "online", 00:17:08.878 "raid_level": "raid5f", 00:17:08.878 "superblock": false, 00:17:08.878 "num_base_bdevs": 4, 00:17:08.878 "num_base_bdevs_discovered": 4, 00:17:08.878 "num_base_bdevs_operational": 4, 00:17:08.878 "base_bdevs_list": [ 00:17:08.878 { 00:17:08.878 "name": "spare", 00:17:08.878 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 }, 00:17:08.878 { 00:17:08.878 "name": "BaseBdev2", 00:17:08.878 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 }, 00:17:08.878 { 00:17:08.878 "name": "BaseBdev3", 00:17:08.878 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 }, 00:17:08.878 { 00:17:08.878 "name": "BaseBdev4", 00:17:08.878 
"uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 } 00:17:08.878 ] 00:17:08.878 }' 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.878 "name": "raid_bdev1", 00:17:08.878 "uuid": 
"942db35a-0677-4a86-88d9-981425362d30", 00:17:08.878 "strip_size_kb": 64, 00:17:08.878 "state": "online", 00:17:08.878 "raid_level": "raid5f", 00:17:08.878 "superblock": false, 00:17:08.878 "num_base_bdevs": 4, 00:17:08.878 "num_base_bdevs_discovered": 4, 00:17:08.878 "num_base_bdevs_operational": 4, 00:17:08.878 "base_bdevs_list": [ 00:17:08.878 { 00:17:08.878 "name": "spare", 00:17:08.878 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 }, 00:17:08.878 { 00:17:08.878 "name": "BaseBdev2", 00:17:08.878 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 }, 00:17:08.878 { 00:17:08.878 "name": "BaseBdev3", 00:17:08.878 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 }, 00:17:08.878 { 00:17:08.878 "name": "BaseBdev4", 00:17:08.878 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:08.878 "is_configured": true, 00:17:08.878 "data_offset": 0, 00:17:08.878 "data_size": 65536 00:17:08.878 } 00:17:08.878 ] 00:17:08.878 }' 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.878 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.879 16:18:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.879 "name": "raid_bdev1", 00:17:08.879 "uuid": "942db35a-0677-4a86-88d9-981425362d30", 00:17:08.879 "strip_size_kb": 64, 00:17:08.879 "state": "online", 00:17:08.879 "raid_level": "raid5f", 00:17:08.879 "superblock": false, 00:17:08.879 "num_base_bdevs": 4, 00:17:08.879 "num_base_bdevs_discovered": 4, 00:17:08.879 "num_base_bdevs_operational": 4, 00:17:08.879 "base_bdevs_list": [ 00:17:08.879 { 00:17:08.879 "name": "spare", 00:17:08.879 "uuid": "f2a4021d-0b0e-5636-b822-63eaf2334a7a", 00:17:08.879 "is_configured": 
true, 00:17:08.879 "data_offset": 0, 00:17:08.879 "data_size": 65536 00:17:08.879 }, 00:17:08.879 { 00:17:08.879 "name": "BaseBdev2", 00:17:08.879 "uuid": "7bde7b9f-a076-5996-b803-c57cf8dbf8ca", 00:17:08.879 "is_configured": true, 00:17:08.879 "data_offset": 0, 00:17:08.879 "data_size": 65536 00:17:08.879 }, 00:17:08.879 { 00:17:08.879 "name": "BaseBdev3", 00:17:08.879 "uuid": "60c898b5-74b0-55ff-b70f-a0c22c5a8a7b", 00:17:08.879 "is_configured": true, 00:17:08.879 "data_offset": 0, 00:17:08.879 "data_size": 65536 00:17:08.879 }, 00:17:08.879 { 00:17:08.879 "name": "BaseBdev4", 00:17:08.879 "uuid": "a976a7b4-a4ac-55ed-9b87-3bd01c063149", 00:17:08.879 "is_configured": true, 00:17:08.879 "data_offset": 0, 00:17:08.879 "data_size": 65536 00:17:08.879 } 00:17:08.879 ] 00:17:08.879 }' 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.879 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.448 [2024-09-28 16:18:23.863321] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.448 [2024-09-28 16:18:23.863399] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.448 [2024-09-28 16:18:23.863492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.448 [2024-09-28 16:18:23.863585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.448 [2024-09-28 16:18:23.863638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:09.448 16:18:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:09.448 16:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:09.448 /dev/nbd0 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.708 1+0 records in 00:17:09.708 1+0 records out 00:17:09.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376374 s, 10.9 MB/s 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:09.708 /dev/nbd1 00:17:09.708 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.968 1+0 records in 00:17:09.968 1+0 records out 00:17:09.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041138 s, 10.0 MB/s 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.968 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.236 16:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84600 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84600 ']' 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84600 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84600 00:17:10.510 killing process with pid 84600 00:17:10.510 Received shutdown signal, test time was about 60.000000 seconds 00:17:10.510 00:17:10.510 Latency(us) 00:17:10.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.510 =================================================================================================================== 00:17:10.510 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84600' 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84600 00:17:10.510 [2024-09-28 16:18:25.084566] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.510 16:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84600 00:17:11.123 [2024-09-28 16:18:25.540847] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.076 16:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:12.076 00:17:12.076 real 0m19.948s 00:17:12.076 user 0m23.653s 00:17:12.076 sys 0m2.341s 00:17:12.076 16:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.076 16:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.076 ************************************ 00:17:12.076 END TEST raid5f_rebuild_test 00:17:12.076 ************************************ 00:17:12.336 16:18:26 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:17:12.336 16:18:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:12.336 16:18:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.336 16:18:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.337 ************************************ 00:17:12.337 START TEST raid5f_rebuild_test_sb 00:17:12.337 ************************************ 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:12.337 16:18:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85122 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85122 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85122 ']' 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.337 16:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.337 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:12.337 Zero copy mechanism will not be used. 00:17:12.337 [2024-09-28 16:18:26.878107] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:12.337 [2024-09-28 16:18:26.878210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85122 ] 00:17:12.597 [2024-09-28 16:18:27.040070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.597 [2024-09-28 16:18:27.236962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.857 [2024-09-28 16:18:27.430089] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.857 [2024-09-28 16:18:27.430129] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.117 BaseBdev1_malloc 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.117 [2024-09-28 16:18:27.730559] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:13.117 [2024-09-28 16:18:27.730628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.117 [2024-09-28 16:18:27.730651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:13.117 [2024-09-28 16:18:27.730664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.117 [2024-09-28 16:18:27.732525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.117 [2024-09-28 16:18:27.732560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.117 BaseBdev1 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.117 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 BaseBdev2_malloc 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 [2024-09-28 16:18:27.816339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:13.378 [2024-09-28 16:18:27.816392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:13.378 [2024-09-28 16:18:27.816412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.378 [2024-09-28 16:18:27.816423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.378 [2024-09-28 16:18:27.818295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.378 [2024-09-28 16:18:27.818327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:13.378 BaseBdev2 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 BaseBdev3_malloc 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 [2024-09-28 16:18:27.868700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:13.378 [2024-09-28 16:18:27.868746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.378 [2024-09-28 16:18:27.868766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.378 [2024-09-28 
16:18:27.868776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.378 [2024-09-28 16:18:27.870587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.378 [2024-09-28 16:18:27.870619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:13.378 BaseBdev3 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 BaseBdev4_malloc 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 [2024-09-28 16:18:27.922864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:13.378 [2024-09-28 16:18:27.922911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.378 [2024-09-28 16:18:27.922930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:13.378 [2024-09-28 16:18:27.922939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.378 [2024-09-28 16:18:27.925048] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:13.378 [2024-09-28 16:18:27.925086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:13.378 BaseBdev4 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 spare_malloc 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 spare_delay 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 [2024-09-28 16:18:27.988309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.378 [2024-09-28 16:18:27.988359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.379 [2024-09-28 16:18:27.988378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:13.379 [2024-09-28 16:18:27.988388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.379 [2024-09-28 16:18:27.990276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.379 [2024-09-28 16:18:27.990306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.379 spare 00:17:13.379 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.379 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:13.379 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.379 16:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.379 [2024-09-28 16:18:28.000362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.379 [2024-09-28 16:18:28.001930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.379 [2024-09-28 16:18:28.001988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.379 [2024-09-28 16:18:28.002033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:13.379 [2024-09-28 16:18:28.002207] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:13.379 [2024-09-28 16:18:28.002234] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:13.379 [2024-09-28 16:18:28.002460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:13.379 [2024-09-28 16:18:28.008570] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:13.379 [2024-09-28 16:18:28.008590] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:13.379 [2024-09-28 16:18:28.008758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.379 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.639 16:18:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.639 "name": "raid_bdev1", 00:17:13.639 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:13.639 "strip_size_kb": 64, 00:17:13.639 "state": "online", 00:17:13.639 "raid_level": "raid5f", 00:17:13.639 "superblock": true, 00:17:13.639 "num_base_bdevs": 4, 00:17:13.639 "num_base_bdevs_discovered": 4, 00:17:13.639 "num_base_bdevs_operational": 4, 00:17:13.639 "base_bdevs_list": [ 00:17:13.639 { 00:17:13.639 "name": "BaseBdev1", 00:17:13.639 "uuid": "39e86a78-095a-500a-8db3-cebd25c75a9c", 00:17:13.639 "is_configured": true, 00:17:13.639 "data_offset": 2048, 00:17:13.639 "data_size": 63488 00:17:13.639 }, 00:17:13.639 { 00:17:13.639 "name": "BaseBdev2", 00:17:13.639 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:13.639 "is_configured": true, 00:17:13.639 "data_offset": 2048, 00:17:13.639 "data_size": 63488 00:17:13.639 }, 00:17:13.639 { 00:17:13.639 "name": "BaseBdev3", 00:17:13.639 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:13.639 "is_configured": true, 00:17:13.639 "data_offset": 2048, 00:17:13.639 "data_size": 63488 00:17:13.639 }, 00:17:13.639 { 00:17:13.639 "name": "BaseBdev4", 00:17:13.639 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:13.639 "is_configured": true, 00:17:13.639 "data_offset": 2048, 00:17:13.639 "data_size": 63488 00:17:13.639 } 00:17:13.639 ] 00:17:13.639 }' 00:17:13.639 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.639 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.898 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.899 16:18:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.899 [2024-09-28 16:18:28.511483] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.899 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:14.159 16:18:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:14.159 [2024-09-28 16:18:28.783304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:14.159 /dev/nbd0 00:17:14.159 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.419 1+0 records in 00:17:14.419 
1+0 records out 00:17:14.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426712 s, 9.6 MB/s 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:14.419 16:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:14.679 496+0 records in 00:17:14.679 496+0 records out 00:17:14.679 97517568 bytes (98 MB, 93 MiB) copied, 0.444877 s, 219 MB/s 00:17:14.679 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:14.679 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.679 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:14.679 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:14.679 16:18:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:14.679 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.679 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:14.939 [2024-09-28 16:18:29.530021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.939 [2024-09-28 16:18:29.546546] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:14.939 16:18:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.939 "name": "raid_bdev1", 00:17:14.939 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:14.939 "strip_size_kb": 64, 00:17:14.939 "state": "online", 00:17:14.939 "raid_level": "raid5f", 00:17:14.939 "superblock": true, 00:17:14.939 "num_base_bdevs": 4, 00:17:14.939 "num_base_bdevs_discovered": 3, 00:17:14.939 "num_base_bdevs_operational": 3, 00:17:14.939 
"base_bdevs_list": [ 00:17:14.939 { 00:17:14.939 "name": null, 00:17:14.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.939 "is_configured": false, 00:17:14.939 "data_offset": 0, 00:17:14.939 "data_size": 63488 00:17:14.939 }, 00:17:14.939 { 00:17:14.939 "name": "BaseBdev2", 00:17:14.939 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:14.939 "is_configured": true, 00:17:14.939 "data_offset": 2048, 00:17:14.939 "data_size": 63488 00:17:14.939 }, 00:17:14.939 { 00:17:14.939 "name": "BaseBdev3", 00:17:14.939 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:14.939 "is_configured": true, 00:17:14.939 "data_offset": 2048, 00:17:14.939 "data_size": 63488 00:17:14.939 }, 00:17:14.939 { 00:17:14.939 "name": "BaseBdev4", 00:17:14.939 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:14.939 "is_configured": true, 00:17:14.939 "data_offset": 2048, 00:17:14.939 "data_size": 63488 00:17:14.939 } 00:17:14.939 ] 00:17:14.939 }' 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.939 16:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.510 16:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.510 16:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.510 16:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.510 [2024-09-28 16:18:30.037679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.510 [2024-09-28 16:18:30.052540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:15.510 16:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.510 16:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:15.510 [2024-09-28 16:18:30.062025] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.450 "name": "raid_bdev1", 00:17:16.450 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:16.450 "strip_size_kb": 64, 00:17:16.450 "state": "online", 00:17:16.450 "raid_level": "raid5f", 00:17:16.450 "superblock": true, 00:17:16.450 "num_base_bdevs": 4, 00:17:16.450 "num_base_bdevs_discovered": 4, 00:17:16.450 "num_base_bdevs_operational": 4, 00:17:16.450 "process": { 00:17:16.450 "type": "rebuild", 00:17:16.450 "target": "spare", 00:17:16.450 "progress": { 00:17:16.450 "blocks": 19200, 00:17:16.450 "percent": 10 00:17:16.450 } 00:17:16.450 }, 00:17:16.450 "base_bdevs_list": [ 00:17:16.450 { 00:17:16.450 "name": "spare", 00:17:16.450 "uuid": 
"c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:16.450 "is_configured": true, 00:17:16.450 "data_offset": 2048, 00:17:16.450 "data_size": 63488 00:17:16.450 }, 00:17:16.450 { 00:17:16.450 "name": "BaseBdev2", 00:17:16.450 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:16.450 "is_configured": true, 00:17:16.450 "data_offset": 2048, 00:17:16.450 "data_size": 63488 00:17:16.450 }, 00:17:16.450 { 00:17:16.450 "name": "BaseBdev3", 00:17:16.450 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:16.450 "is_configured": true, 00:17:16.450 "data_offset": 2048, 00:17:16.450 "data_size": 63488 00:17:16.450 }, 00:17:16.450 { 00:17:16.450 "name": "BaseBdev4", 00:17:16.450 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:16.450 "is_configured": true, 00:17:16.450 "data_offset": 2048, 00:17:16.450 "data_size": 63488 00:17:16.450 } 00:17:16.450 ] 00:17:16.450 }' 00:17:16.450 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.710 [2024-09-28 16:18:31.193110] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.710 [2024-09-28 16:18:31.269129] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:16.710 [2024-09-28 16:18:31.269193] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.710 [2024-09-28 16:18:31.269210] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.710 [2024-09-28 16:18:31.269221] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.710 "name": "raid_bdev1", 00:17:16.710 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:16.710 "strip_size_kb": 64, 00:17:16.710 "state": "online", 00:17:16.710 "raid_level": "raid5f", 00:17:16.710 "superblock": true, 00:17:16.710 "num_base_bdevs": 4, 00:17:16.710 "num_base_bdevs_discovered": 3, 00:17:16.710 "num_base_bdevs_operational": 3, 00:17:16.710 "base_bdevs_list": [ 00:17:16.710 { 00:17:16.710 "name": null, 00:17:16.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.710 "is_configured": false, 00:17:16.710 "data_offset": 0, 00:17:16.710 "data_size": 63488 00:17:16.710 }, 00:17:16.710 { 00:17:16.710 "name": "BaseBdev2", 00:17:16.710 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:16.710 "is_configured": true, 00:17:16.710 "data_offset": 2048, 00:17:16.710 "data_size": 63488 00:17:16.710 }, 00:17:16.710 { 00:17:16.710 "name": "BaseBdev3", 00:17:16.710 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:16.710 "is_configured": true, 00:17:16.710 "data_offset": 2048, 00:17:16.710 "data_size": 63488 00:17:16.710 }, 00:17:16.710 { 00:17:16.710 "name": "BaseBdev4", 00:17:16.710 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:16.710 "is_configured": true, 00:17:16.710 "data_offset": 2048, 00:17:16.710 "data_size": 63488 00:17:16.710 } 00:17:16.710 ] 00:17:16.710 }' 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.710 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.280 
16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.280 "name": "raid_bdev1", 00:17:17.280 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:17.280 "strip_size_kb": 64, 00:17:17.280 "state": "online", 00:17:17.280 "raid_level": "raid5f", 00:17:17.280 "superblock": true, 00:17:17.280 "num_base_bdevs": 4, 00:17:17.280 "num_base_bdevs_discovered": 3, 00:17:17.280 "num_base_bdevs_operational": 3, 00:17:17.280 "base_bdevs_list": [ 00:17:17.280 { 00:17:17.280 "name": null, 00:17:17.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.280 "is_configured": false, 00:17:17.280 "data_offset": 0, 00:17:17.280 "data_size": 63488 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "name": "BaseBdev2", 00:17:17.280 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:17.280 "is_configured": true, 00:17:17.280 "data_offset": 2048, 00:17:17.280 "data_size": 63488 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "name": "BaseBdev3", 00:17:17.280 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:17.280 "is_configured": true, 00:17:17.280 "data_offset": 2048, 00:17:17.280 
"data_size": 63488 00:17:17.280 }, 00:17:17.280 { 00:17:17.280 "name": "BaseBdev4", 00:17:17.280 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:17.280 "is_configured": true, 00:17:17.280 "data_offset": 2048, 00:17:17.280 "data_size": 63488 00:17:17.280 } 00:17:17.280 ] 00:17:17.280 }' 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 [2024-09-28 16:18:31.883456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.280 [2024-09-28 16:18:31.897671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.280 16:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:17.280 [2024-09-28 16:18:31.907029] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.220 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.220 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.220 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.220 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.220 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.480 "name": "raid_bdev1", 00:17:18.480 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:18.480 "strip_size_kb": 64, 00:17:18.480 "state": "online", 00:17:18.480 "raid_level": "raid5f", 00:17:18.480 "superblock": true, 00:17:18.480 "num_base_bdevs": 4, 00:17:18.480 "num_base_bdevs_discovered": 4, 00:17:18.480 "num_base_bdevs_operational": 4, 00:17:18.480 "process": { 00:17:18.480 "type": "rebuild", 00:17:18.480 "target": "spare", 00:17:18.480 "progress": { 00:17:18.480 "blocks": 19200, 00:17:18.480 "percent": 10 00:17:18.480 } 00:17:18.480 }, 00:17:18.480 "base_bdevs_list": [ 00:17:18.480 { 00:17:18.480 "name": "spare", 00:17:18.480 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 00:17:18.480 "name": "BaseBdev2", 00:17:18.480 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 
00:17:18.480 "name": "BaseBdev3", 00:17:18.480 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 00:17:18.480 "name": "BaseBdev4", 00:17:18.480 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 } 00:17:18.480 ] 00:17:18.480 }' 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.480 16:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:18.480 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=646 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.480 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.480 "name": "raid_bdev1", 00:17:18.480 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:18.480 "strip_size_kb": 64, 00:17:18.480 "state": "online", 00:17:18.480 "raid_level": "raid5f", 00:17:18.480 "superblock": true, 00:17:18.480 "num_base_bdevs": 4, 00:17:18.480 "num_base_bdevs_discovered": 4, 00:17:18.480 "num_base_bdevs_operational": 4, 00:17:18.480 "process": { 00:17:18.480 "type": "rebuild", 00:17:18.480 "target": "spare", 00:17:18.480 "progress": { 00:17:18.480 "blocks": 21120, 00:17:18.480 "percent": 11 00:17:18.480 } 00:17:18.480 }, 00:17:18.480 "base_bdevs_list": [ 00:17:18.480 { 00:17:18.480 "name": "spare", 00:17:18.480 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 00:17:18.480 "name": "BaseBdev2", 00:17:18.480 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 
00:17:18.481 "name": "BaseBdev3", 00:17:18.481 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:18.481 "is_configured": true, 00:17:18.481 "data_offset": 2048, 00:17:18.481 "data_size": 63488 00:17:18.481 }, 00:17:18.481 { 00:17:18.481 "name": "BaseBdev4", 00:17:18.481 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:18.481 "is_configured": true, 00:17:18.481 "data_offset": 2048, 00:17:18.481 "data_size": 63488 00:17:18.481 } 00:17:18.481 ] 00:17:18.481 }' 00:17:18.481 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.481 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.481 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.740 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.740 16:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.679 "name": "raid_bdev1", 00:17:19.679 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:19.679 "strip_size_kb": 64, 00:17:19.679 "state": "online", 00:17:19.679 "raid_level": "raid5f", 00:17:19.679 "superblock": true, 00:17:19.679 "num_base_bdevs": 4, 00:17:19.679 "num_base_bdevs_discovered": 4, 00:17:19.679 "num_base_bdevs_operational": 4, 00:17:19.679 "process": { 00:17:19.679 "type": "rebuild", 00:17:19.679 "target": "spare", 00:17:19.679 "progress": { 00:17:19.679 "blocks": 42240, 00:17:19.679 "percent": 22 00:17:19.679 } 00:17:19.679 }, 00:17:19.679 "base_bdevs_list": [ 00:17:19.679 { 00:17:19.679 "name": "spare", 00:17:19.679 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 }, 00:17:19.679 { 00:17:19.679 "name": "BaseBdev2", 00:17:19.679 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 }, 00:17:19.679 { 00:17:19.679 "name": "BaseBdev3", 00:17:19.679 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 }, 00:17:19.679 { 00:17:19.679 "name": "BaseBdev4", 00:17:19.679 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 } 00:17:19.679 ] 00:17:19.679 }' 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.679 16:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.061 "name": "raid_bdev1", 00:17:21.061 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:21.061 "strip_size_kb": 64, 00:17:21.061 "state": "online", 00:17:21.061 
"raid_level": "raid5f", 00:17:21.061 "superblock": true, 00:17:21.061 "num_base_bdevs": 4, 00:17:21.061 "num_base_bdevs_discovered": 4, 00:17:21.061 "num_base_bdevs_operational": 4, 00:17:21.061 "process": { 00:17:21.061 "type": "rebuild", 00:17:21.061 "target": "spare", 00:17:21.061 "progress": { 00:17:21.061 "blocks": 65280, 00:17:21.061 "percent": 34 00:17:21.061 } 00:17:21.061 }, 00:17:21.061 "base_bdevs_list": [ 00:17:21.061 { 00:17:21.061 "name": "spare", 00:17:21.061 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:21.061 "is_configured": true, 00:17:21.061 "data_offset": 2048, 00:17:21.061 "data_size": 63488 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev2", 00:17:21.061 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:21.061 "is_configured": true, 00:17:21.061 "data_offset": 2048, 00:17:21.061 "data_size": 63488 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev3", 00:17:21.061 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:21.061 "is_configured": true, 00:17:21.061 "data_offset": 2048, 00:17:21.061 "data_size": 63488 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev4", 00:17:21.061 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:21.061 "is_configured": true, 00:17:21.061 "data_offset": 2048, 00:17:21.061 "data_size": 63488 00:17:21.061 } 00:17:21.061 ] 00:17:21.061 }' 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.061 16:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.001 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.001 "name": "raid_bdev1", 00:17:22.001 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:22.001 "strip_size_kb": 64, 00:17:22.001 "state": "online", 00:17:22.001 "raid_level": "raid5f", 00:17:22.001 "superblock": true, 00:17:22.001 "num_base_bdevs": 4, 00:17:22.001 "num_base_bdevs_discovered": 4, 00:17:22.001 "num_base_bdevs_operational": 4, 00:17:22.001 "process": { 00:17:22.002 "type": "rebuild", 00:17:22.002 "target": "spare", 00:17:22.002 "progress": { 00:17:22.002 "blocks": 86400, 00:17:22.002 "percent": 45 00:17:22.002 } 00:17:22.002 }, 00:17:22.002 "base_bdevs_list": [ 00:17:22.002 { 00:17:22.002 "name": "spare", 00:17:22.002 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:22.002 "is_configured": true, 
00:17:22.002 "data_offset": 2048, 00:17:22.002 "data_size": 63488 00:17:22.002 }, 00:17:22.002 { 00:17:22.002 "name": "BaseBdev2", 00:17:22.002 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:22.002 "is_configured": true, 00:17:22.002 "data_offset": 2048, 00:17:22.002 "data_size": 63488 00:17:22.002 }, 00:17:22.002 { 00:17:22.002 "name": "BaseBdev3", 00:17:22.002 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:22.002 "is_configured": true, 00:17:22.002 "data_offset": 2048, 00:17:22.002 "data_size": 63488 00:17:22.002 }, 00:17:22.002 { 00:17:22.002 "name": "BaseBdev4", 00:17:22.002 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:22.002 "is_configured": true, 00:17:22.002 "data_offset": 2048, 00:17:22.002 "data_size": 63488 00:17:22.002 } 00:17:22.002 ] 00:17:22.002 }' 00:17:22.002 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.002 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.002 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.002 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.002 16:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.384 "name": "raid_bdev1", 00:17:23.384 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:23.384 "strip_size_kb": 64, 00:17:23.384 "state": "online", 00:17:23.384 "raid_level": "raid5f", 00:17:23.384 "superblock": true, 00:17:23.384 "num_base_bdevs": 4, 00:17:23.384 "num_base_bdevs_discovered": 4, 00:17:23.384 "num_base_bdevs_operational": 4, 00:17:23.384 "process": { 00:17:23.384 "type": "rebuild", 00:17:23.384 "target": "spare", 00:17:23.384 "progress": { 00:17:23.384 "blocks": 109440, 00:17:23.384 "percent": 57 00:17:23.384 } 00:17:23.384 }, 00:17:23.384 "base_bdevs_list": [ 00:17:23.384 { 00:17:23.384 "name": "spare", 00:17:23.384 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:23.384 "is_configured": true, 00:17:23.384 "data_offset": 2048, 00:17:23.384 "data_size": 63488 00:17:23.384 }, 00:17:23.384 { 00:17:23.384 "name": "BaseBdev2", 00:17:23.384 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:23.384 "is_configured": true, 00:17:23.384 "data_offset": 2048, 00:17:23.384 "data_size": 63488 00:17:23.384 }, 00:17:23.384 { 00:17:23.384 "name": "BaseBdev3", 00:17:23.384 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:23.384 "is_configured": true, 00:17:23.384 "data_offset": 2048, 00:17:23.384 "data_size": 63488 00:17:23.384 }, 00:17:23.384 
{ 00:17:23.384 "name": "BaseBdev4", 00:17:23.384 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:23.384 "is_configured": true, 00:17:23.384 "data_offset": 2048, 00:17:23.384 "data_size": 63488 00:17:23.384 } 00:17:23.384 ] 00:17:23.384 }' 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.384 16:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.324 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.325 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.325 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.325 "name": "raid_bdev1", 00:17:24.325 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:24.325 "strip_size_kb": 64, 00:17:24.325 "state": "online", 00:17:24.325 "raid_level": "raid5f", 00:17:24.325 "superblock": true, 00:17:24.325 "num_base_bdevs": 4, 00:17:24.325 "num_base_bdevs_discovered": 4, 00:17:24.325 "num_base_bdevs_operational": 4, 00:17:24.325 "process": { 00:17:24.325 "type": "rebuild", 00:17:24.325 "target": "spare", 00:17:24.325 "progress": { 00:17:24.325 "blocks": 130560, 00:17:24.325 "percent": 68 00:17:24.325 } 00:17:24.325 }, 00:17:24.325 "base_bdevs_list": [ 00:17:24.325 { 00:17:24.325 "name": "spare", 00:17:24.325 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:24.325 "is_configured": true, 00:17:24.325 "data_offset": 2048, 00:17:24.325 "data_size": 63488 00:17:24.325 }, 00:17:24.325 { 00:17:24.325 "name": "BaseBdev2", 00:17:24.325 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:24.325 "is_configured": true, 00:17:24.325 "data_offset": 2048, 00:17:24.325 "data_size": 63488 00:17:24.325 }, 00:17:24.325 { 00:17:24.325 "name": "BaseBdev3", 00:17:24.325 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:24.325 "is_configured": true, 00:17:24.325 "data_offset": 2048, 00:17:24.325 "data_size": 63488 00:17:24.325 }, 00:17:24.325 { 00:17:24.325 "name": "BaseBdev4", 00:17:24.325 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:24.325 "is_configured": true, 00:17:24.325 "data_offset": 2048, 00:17:24.325 "data_size": 63488 00:17:24.325 } 00:17:24.325 ] 00:17:24.325 }' 00:17:24.325 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.325 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.325 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:24.325 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.325 16:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.263 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.522 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.522 "name": "raid_bdev1", 00:17:25.522 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:25.522 "strip_size_kb": 64, 00:17:25.522 "state": "online", 00:17:25.522 "raid_level": "raid5f", 00:17:25.522 "superblock": true, 00:17:25.522 "num_base_bdevs": 4, 00:17:25.522 "num_base_bdevs_discovered": 4, 00:17:25.522 "num_base_bdevs_operational": 4, 00:17:25.522 "process": { 00:17:25.522 "type": 
"rebuild", 00:17:25.522 "target": "spare", 00:17:25.522 "progress": { 00:17:25.522 "blocks": 151680, 00:17:25.522 "percent": 79 00:17:25.522 } 00:17:25.522 }, 00:17:25.522 "base_bdevs_list": [ 00:17:25.522 { 00:17:25.522 "name": "spare", 00:17:25.522 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:25.522 "is_configured": true, 00:17:25.522 "data_offset": 2048, 00:17:25.522 "data_size": 63488 00:17:25.522 }, 00:17:25.522 { 00:17:25.522 "name": "BaseBdev2", 00:17:25.522 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:25.522 "is_configured": true, 00:17:25.522 "data_offset": 2048, 00:17:25.522 "data_size": 63488 00:17:25.522 }, 00:17:25.522 { 00:17:25.523 "name": "BaseBdev3", 00:17:25.523 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:25.523 "is_configured": true, 00:17:25.523 "data_offset": 2048, 00:17:25.523 "data_size": 63488 00:17:25.523 }, 00:17:25.523 { 00:17:25.523 "name": "BaseBdev4", 00:17:25.523 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:25.523 "is_configured": true, 00:17:25.523 "data_offset": 2048, 00:17:25.523 "data_size": 63488 00:17:25.523 } 00:17:25.523 ] 00:17:25.523 }' 00:17:25.523 16:18:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.523 16:18:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.523 16:18:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.523 16:18:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.523 16:18:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.461 "name": "raid_bdev1", 00:17:26.461 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:26.461 "strip_size_kb": 64, 00:17:26.461 "state": "online", 00:17:26.461 "raid_level": "raid5f", 00:17:26.461 "superblock": true, 00:17:26.461 "num_base_bdevs": 4, 00:17:26.461 "num_base_bdevs_discovered": 4, 00:17:26.461 "num_base_bdevs_operational": 4, 00:17:26.461 "process": { 00:17:26.461 "type": "rebuild", 00:17:26.461 "target": "spare", 00:17:26.461 "progress": { 00:17:26.461 "blocks": 174720, 00:17:26.461 "percent": 91 00:17:26.461 } 00:17:26.461 }, 00:17:26.461 "base_bdevs_list": [ 00:17:26.461 { 00:17:26.461 "name": "spare", 00:17:26.461 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:26.461 "is_configured": true, 00:17:26.461 "data_offset": 2048, 00:17:26.461 "data_size": 63488 00:17:26.461 }, 00:17:26.461 { 00:17:26.461 "name": "BaseBdev2", 00:17:26.461 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:26.461 
"is_configured": true, 00:17:26.461 "data_offset": 2048, 00:17:26.461 "data_size": 63488 00:17:26.461 }, 00:17:26.461 { 00:17:26.461 "name": "BaseBdev3", 00:17:26.461 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:26.461 "is_configured": true, 00:17:26.461 "data_offset": 2048, 00:17:26.461 "data_size": 63488 00:17:26.461 }, 00:17:26.461 { 00:17:26.461 "name": "BaseBdev4", 00:17:26.461 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:26.461 "is_configured": true, 00:17:26.461 "data_offset": 2048, 00:17:26.461 "data_size": 63488 00:17:26.461 } 00:17:26.461 ] 00:17:26.461 }' 00:17:26.461 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.720 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.720 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.720 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.721 16:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.289 [2024-09-28 16:18:41.957958] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:27.289 [2024-09-28 16:18:41.958047] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:27.289 [2024-09-28 16:18:41.958172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.549 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.809 "name": "raid_bdev1", 00:17:27.809 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:27.809 "strip_size_kb": 64, 00:17:27.809 "state": "online", 00:17:27.809 "raid_level": "raid5f", 00:17:27.809 "superblock": true, 00:17:27.809 "num_base_bdevs": 4, 00:17:27.809 "num_base_bdevs_discovered": 4, 00:17:27.809 "num_base_bdevs_operational": 4, 00:17:27.809 "base_bdevs_list": [ 00:17:27.809 { 00:17:27.809 "name": "spare", 00:17:27.809 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 }, 00:17:27.809 { 00:17:27.809 "name": "BaseBdev2", 00:17:27.809 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 }, 00:17:27.809 { 00:17:27.809 "name": "BaseBdev3", 00:17:27.809 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 }, 00:17:27.809 { 00:17:27.809 "name": 
"BaseBdev4", 00:17:27.809 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 } 00:17:27.809 ] 00:17:27.809 }' 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:27.809 "name": "raid_bdev1", 00:17:27.809 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:27.809 "strip_size_kb": 64, 00:17:27.809 "state": "online", 00:17:27.809 "raid_level": "raid5f", 00:17:27.809 "superblock": true, 00:17:27.809 "num_base_bdevs": 4, 00:17:27.809 "num_base_bdevs_discovered": 4, 00:17:27.809 "num_base_bdevs_operational": 4, 00:17:27.809 "base_bdevs_list": [ 00:17:27.809 { 00:17:27.809 "name": "spare", 00:17:27.809 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 }, 00:17:27.809 { 00:17:27.809 "name": "BaseBdev2", 00:17:27.809 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 }, 00:17:27.809 { 00:17:27.809 "name": "BaseBdev3", 00:17:27.809 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 }, 00:17:27.809 { 00:17:27.809 "name": "BaseBdev4", 00:17:27.809 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:27.809 "is_configured": true, 00:17:27.809 "data_offset": 2048, 00:17:27.809 "data_size": 63488 00:17:27.809 } 00:17:27.809 ] 00:17:27.809 }' 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.809 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.069 "name": "raid_bdev1", 00:17:28.069 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:28.069 "strip_size_kb": 64, 00:17:28.069 "state": "online", 00:17:28.069 "raid_level": "raid5f", 00:17:28.069 "superblock": true, 00:17:28.069 "num_base_bdevs": 4, 00:17:28.069 "num_base_bdevs_discovered": 4, 00:17:28.069 "num_base_bdevs_operational": 4, 00:17:28.069 "base_bdevs_list": [ 00:17:28.069 { 
00:17:28.069 "name": "spare", 00:17:28.069 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:28.069 "is_configured": true, 00:17:28.069 "data_offset": 2048, 00:17:28.069 "data_size": 63488 00:17:28.069 }, 00:17:28.069 { 00:17:28.069 "name": "BaseBdev2", 00:17:28.069 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:28.069 "is_configured": true, 00:17:28.069 "data_offset": 2048, 00:17:28.069 "data_size": 63488 00:17:28.069 }, 00:17:28.069 { 00:17:28.069 "name": "BaseBdev3", 00:17:28.069 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:28.069 "is_configured": true, 00:17:28.069 "data_offset": 2048, 00:17:28.069 "data_size": 63488 00:17:28.069 }, 00:17:28.069 { 00:17:28.069 "name": "BaseBdev4", 00:17:28.069 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:28.069 "is_configured": true, 00:17:28.069 "data_offset": 2048, 00:17:28.069 "data_size": 63488 00:17:28.069 } 00:17:28.069 ] 00:17:28.069 }' 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.069 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.329 [2024-09-28 16:18:42.939773] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.329 [2024-09-28 16:18:42.939811] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.329 [2024-09-28 16:18:42.939900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.329 [2024-09-28 16:18:42.940006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.329 [2024-09-28 
16:18:42.940022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:28.329 16:18:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:28.329 16:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:28.589 /dev/nbd0 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.589 1+0 records in 00:17:28.589 1+0 records out 00:17:28.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400225 s, 10.2 MB/s 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:28.589 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:28.849 /dev/nbd1 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.849 1+0 records in 00:17:28.849 
1+0 records out 00:17:28.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410613 s, 10.0 MB/s 00:17:28.849 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.109 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:29.369 
16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.369 16:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.629 [2024-09-28 16:18:44.172558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:29.629 [2024-09-28 16:18:44.172629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.629 [2024-09-28 16:18:44.172656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:29.629 [2024-09-28 16:18:44.172665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.629 [2024-09-28 16:18:44.175169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.629 [2024-09-28 16:18:44.175205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:29.629 [2024-09-28 16:18:44.175315] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:29.629 [2024-09-28 16:18:44.175370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.629 [2024-09-28 16:18:44.175525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.629 [2024-09-28 16:18:44.175625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.629 [2024-09-28 16:18:44.175719] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:29.629 spare 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.629 [2024-09-28 16:18:44.275628] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:29.629 [2024-09-28 16:18:44.275658] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:29.629 [2024-09-28 16:18:44.275962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:29.629 [2024-09-28 16:18:44.282369] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:29.629 [2024-09-28 16:18:44.282392] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:29.629 [2024-09-28 16:18:44.282560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.629 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.630 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.889 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.889 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.889 "name": "raid_bdev1", 00:17:29.889 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:29.889 "strip_size_kb": 64, 00:17:29.889 "state": "online", 00:17:29.889 "raid_level": "raid5f", 00:17:29.889 "superblock": true, 00:17:29.889 "num_base_bdevs": 4, 00:17:29.889 "num_base_bdevs_discovered": 4, 00:17:29.889 "num_base_bdevs_operational": 4, 00:17:29.889 "base_bdevs_list": [ 00:17:29.889 { 00:17:29.889 "name": "spare", 00:17:29.889 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:29.889 "is_configured": true, 00:17:29.889 "data_offset": 2048, 00:17:29.889 "data_size": 63488 00:17:29.889 }, 00:17:29.889 { 00:17:29.889 "name": "BaseBdev2", 00:17:29.889 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:29.889 "is_configured": true, 00:17:29.889 "data_offset": 
2048, 00:17:29.889 "data_size": 63488 00:17:29.889 }, 00:17:29.889 { 00:17:29.889 "name": "BaseBdev3", 00:17:29.889 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:29.889 "is_configured": true, 00:17:29.889 "data_offset": 2048, 00:17:29.889 "data_size": 63488 00:17:29.889 }, 00:17:29.889 { 00:17:29.889 "name": "BaseBdev4", 00:17:29.889 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:29.889 "is_configured": true, 00:17:29.889 "data_offset": 2048, 00:17:29.889 "data_size": 63488 00:17:29.889 } 00:17:29.889 ] 00:17:29.889 }' 00:17:29.889 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.890 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.150 "name": 
"raid_bdev1", 00:17:30.150 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:30.150 "strip_size_kb": 64, 00:17:30.150 "state": "online", 00:17:30.150 "raid_level": "raid5f", 00:17:30.150 "superblock": true, 00:17:30.150 "num_base_bdevs": 4, 00:17:30.150 "num_base_bdevs_discovered": 4, 00:17:30.150 "num_base_bdevs_operational": 4, 00:17:30.150 "base_bdevs_list": [ 00:17:30.150 { 00:17:30.150 "name": "spare", 00:17:30.150 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:30.150 "is_configured": true, 00:17:30.150 "data_offset": 2048, 00:17:30.150 "data_size": 63488 00:17:30.150 }, 00:17:30.150 { 00:17:30.150 "name": "BaseBdev2", 00:17:30.150 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:30.150 "is_configured": true, 00:17:30.150 "data_offset": 2048, 00:17:30.150 "data_size": 63488 00:17:30.150 }, 00:17:30.150 { 00:17:30.150 "name": "BaseBdev3", 00:17:30.150 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:30.150 "is_configured": true, 00:17:30.150 "data_offset": 2048, 00:17:30.150 "data_size": 63488 00:17:30.150 }, 00:17:30.150 { 00:17:30.150 "name": "BaseBdev4", 00:17:30.150 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:30.150 "is_configured": true, 00:17:30.150 "data_offset": 2048, 00:17:30.150 "data_size": 63488 00:17:30.150 } 00:17:30.150 ] 00:17:30.150 }' 00:17:30.150 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.410 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.411 [2024-09-28 16:18:44.970572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.411 16:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.411 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.411 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.411 "name": "raid_bdev1", 00:17:30.411 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:30.411 "strip_size_kb": 64, 00:17:30.411 "state": "online", 00:17:30.411 "raid_level": "raid5f", 00:17:30.411 "superblock": true, 00:17:30.411 "num_base_bdevs": 4, 00:17:30.411 "num_base_bdevs_discovered": 3, 00:17:30.411 "num_base_bdevs_operational": 3, 00:17:30.411 "base_bdevs_list": [ 00:17:30.411 { 00:17:30.411 "name": null, 00:17:30.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.411 "is_configured": false, 00:17:30.411 "data_offset": 0, 00:17:30.411 "data_size": 63488 00:17:30.411 }, 00:17:30.411 { 00:17:30.411 "name": "BaseBdev2", 00:17:30.411 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:30.411 "is_configured": true, 00:17:30.411 "data_offset": 2048, 00:17:30.411 "data_size": 63488 00:17:30.411 }, 00:17:30.411 { 00:17:30.411 "name": "BaseBdev3", 00:17:30.411 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:30.411 "is_configured": true, 00:17:30.411 "data_offset": 2048, 00:17:30.411 "data_size": 63488 00:17:30.411 }, 00:17:30.411 { 00:17:30.411 "name": "BaseBdev4", 00:17:30.411 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:30.411 "is_configured": true, 00:17:30.411 "data_offset": 
2048, 00:17:30.411 "data_size": 63488 00:17:30.411 } 00:17:30.411 ] 00:17:30.411 }' 00:17:30.411 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.411 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.981 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:30.981 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.981 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.981 [2024-09-28 16:18:45.445801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:30.981 [2024-09-28 16:18:45.446010] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:30.981 [2024-09-28 16:18:45.446073] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:30.981 [2024-09-28 16:18:45.446146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:30.981 [2024-09-28 16:18:45.460259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:30.981 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.981 16:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:30.981 [2024-09-28 16:18:45.469971] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.920 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.920 "name": "raid_bdev1", 00:17:31.920 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:31.920 "strip_size_kb": 64, 00:17:31.920 "state": "online", 00:17:31.920 
"raid_level": "raid5f", 00:17:31.920 "superblock": true, 00:17:31.920 "num_base_bdevs": 4, 00:17:31.920 "num_base_bdevs_discovered": 4, 00:17:31.920 "num_base_bdevs_operational": 4, 00:17:31.920 "process": { 00:17:31.920 "type": "rebuild", 00:17:31.920 "target": "spare", 00:17:31.920 "progress": { 00:17:31.920 "blocks": 19200, 00:17:31.920 "percent": 10 00:17:31.920 } 00:17:31.920 }, 00:17:31.920 "base_bdevs_list": [ 00:17:31.920 { 00:17:31.920 "name": "spare", 00:17:31.920 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:31.920 "is_configured": true, 00:17:31.920 "data_offset": 2048, 00:17:31.920 "data_size": 63488 00:17:31.920 }, 00:17:31.920 { 00:17:31.920 "name": "BaseBdev2", 00:17:31.920 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:31.920 "is_configured": true, 00:17:31.920 "data_offset": 2048, 00:17:31.920 "data_size": 63488 00:17:31.920 }, 00:17:31.920 { 00:17:31.920 "name": "BaseBdev3", 00:17:31.920 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:31.920 "is_configured": true, 00:17:31.920 "data_offset": 2048, 00:17:31.920 "data_size": 63488 00:17:31.920 }, 00:17:31.920 { 00:17:31.920 "name": "BaseBdev4", 00:17:31.920 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:31.921 "is_configured": true, 00:17:31.921 "data_offset": 2048, 00:17:31.921 "data_size": 63488 00:17:31.921 } 00:17:31.921 ] 00:17:31.921 }' 00:17:31.921 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.921 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.921 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.181 [2024-09-28 16:18:46.620843] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.181 [2024-09-28 16:18:46.676851] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:32.181 [2024-09-28 16:18:46.676914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.181 [2024-09-28 16:18:46.676930] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.181 [2024-09-28 16:18:46.676940] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.181 "name": "raid_bdev1", 00:17:32.181 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:32.181 "strip_size_kb": 64, 00:17:32.181 "state": "online", 00:17:32.181 "raid_level": "raid5f", 00:17:32.181 "superblock": true, 00:17:32.181 "num_base_bdevs": 4, 00:17:32.181 "num_base_bdevs_discovered": 3, 00:17:32.181 "num_base_bdevs_operational": 3, 00:17:32.181 "base_bdevs_list": [ 00:17:32.181 { 00:17:32.181 "name": null, 00:17:32.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.181 "is_configured": false, 00:17:32.181 "data_offset": 0, 00:17:32.181 "data_size": 63488 00:17:32.181 }, 00:17:32.181 { 00:17:32.181 "name": "BaseBdev2", 00:17:32.181 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:32.181 "is_configured": true, 00:17:32.181 "data_offset": 2048, 00:17:32.181 "data_size": 63488 00:17:32.181 }, 00:17:32.181 { 00:17:32.181 "name": "BaseBdev3", 00:17:32.181 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:32.181 "is_configured": true, 00:17:32.181 "data_offset": 2048, 00:17:32.181 "data_size": 63488 00:17:32.181 }, 00:17:32.181 { 00:17:32.181 "name": "BaseBdev4", 00:17:32.181 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:32.181 "is_configured": true, 00:17:32.181 "data_offset": 2048, 00:17:32.181 "data_size": 63488 00:17:32.181 } 00:17:32.181 ] 00:17:32.181 
}' 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.181 16:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.752 16:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:32.752 16:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.752 16:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.752 [2024-09-28 16:18:47.168000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:32.752 [2024-09-28 16:18:47.168115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.752 [2024-09-28 16:18:47.168152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:32.752 [2024-09-28 16:18:47.168165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.752 [2024-09-28 16:18:47.168731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.752 [2024-09-28 16:18:47.168753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:32.752 [2024-09-28 16:18:47.168848] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:32.752 [2024-09-28 16:18:47.168864] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:32.752 [2024-09-28 16:18:47.168874] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:32.752 [2024-09-28 16:18:47.168898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.752 spare 00:17:32.752 [2024-09-28 16:18:47.182775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:32.752 16:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.752 16:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:32.752 [2024-09-28 16:18:47.191495] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.692 "name": "raid_bdev1", 00:17:33.692 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:33.692 "strip_size_kb": 64, 00:17:33.692 "state": 
"online", 00:17:33.692 "raid_level": "raid5f", 00:17:33.692 "superblock": true, 00:17:33.692 "num_base_bdevs": 4, 00:17:33.692 "num_base_bdevs_discovered": 4, 00:17:33.692 "num_base_bdevs_operational": 4, 00:17:33.692 "process": { 00:17:33.692 "type": "rebuild", 00:17:33.692 "target": "spare", 00:17:33.692 "progress": { 00:17:33.692 "blocks": 19200, 00:17:33.692 "percent": 10 00:17:33.692 } 00:17:33.692 }, 00:17:33.692 "base_bdevs_list": [ 00:17:33.692 { 00:17:33.692 "name": "spare", 00:17:33.692 "uuid": "c6adee12-a899-5e4a-8006-38ed5fc52feb", 00:17:33.692 "is_configured": true, 00:17:33.692 "data_offset": 2048, 00:17:33.692 "data_size": 63488 00:17:33.692 }, 00:17:33.692 { 00:17:33.692 "name": "BaseBdev2", 00:17:33.692 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:33.692 "is_configured": true, 00:17:33.692 "data_offset": 2048, 00:17:33.692 "data_size": 63488 00:17:33.692 }, 00:17:33.692 { 00:17:33.692 "name": "BaseBdev3", 00:17:33.692 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:33.692 "is_configured": true, 00:17:33.692 "data_offset": 2048, 00:17:33.692 "data_size": 63488 00:17:33.692 }, 00:17:33.692 { 00:17:33.692 "name": "BaseBdev4", 00:17:33.692 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:33.692 "is_configured": true, 00:17:33.692 "data_offset": 2048, 00:17:33.692 "data_size": 63488 00:17:33.692 } 00:17:33.692 ] 00:17:33.692 }' 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:33.692 16:18:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.692 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.692 [2024-09-28 16:18:48.351120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.952 [2024-09-28 16:18:48.398387] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:33.952 [2024-09-28 16:18:48.398436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.952 [2024-09-28 16:18:48.398456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.952 [2024-09-28 16:18:48.398463] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.952 16:18:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.952 "name": "raid_bdev1", 00:17:33.952 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:33.952 "strip_size_kb": 64, 00:17:33.952 "state": "online", 00:17:33.952 "raid_level": "raid5f", 00:17:33.952 "superblock": true, 00:17:33.952 "num_base_bdevs": 4, 00:17:33.952 "num_base_bdevs_discovered": 3, 00:17:33.952 "num_base_bdevs_operational": 3, 00:17:33.952 "base_bdevs_list": [ 00:17:33.952 { 00:17:33.952 "name": null, 00:17:33.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.952 "is_configured": false, 00:17:33.952 "data_offset": 0, 00:17:33.952 "data_size": 63488 00:17:33.952 }, 00:17:33.952 { 00:17:33.952 "name": "BaseBdev2", 00:17:33.952 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:33.952 "is_configured": true, 00:17:33.952 "data_offset": 2048, 00:17:33.952 "data_size": 63488 00:17:33.952 }, 00:17:33.952 { 00:17:33.952 "name": "BaseBdev3", 00:17:33.952 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:33.952 "is_configured": true, 00:17:33.952 "data_offset": 2048, 00:17:33.952 "data_size": 63488 00:17:33.952 }, 00:17:33.952 { 00:17:33.952 "name": "BaseBdev4", 00:17:33.952 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:33.952 "is_configured": true, 00:17:33.952 "data_offset": 2048, 00:17:33.952 
"data_size": 63488 00:17:33.952 } 00:17:33.952 ] 00:17:33.952 }' 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.952 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.212 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.212 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.212 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.212 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.212 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.473 "name": "raid_bdev1", 00:17:34.473 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:34.473 "strip_size_kb": 64, 00:17:34.473 "state": "online", 00:17:34.473 "raid_level": "raid5f", 00:17:34.473 "superblock": true, 00:17:34.473 "num_base_bdevs": 4, 00:17:34.473 "num_base_bdevs_discovered": 3, 00:17:34.473 "num_base_bdevs_operational": 3, 00:17:34.473 "base_bdevs_list": [ 00:17:34.473 { 00:17:34.473 "name": null, 00:17:34.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.473 
"is_configured": false, 00:17:34.473 "data_offset": 0, 00:17:34.473 "data_size": 63488 00:17:34.473 }, 00:17:34.473 { 00:17:34.473 "name": "BaseBdev2", 00:17:34.473 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:34.473 "is_configured": true, 00:17:34.473 "data_offset": 2048, 00:17:34.473 "data_size": 63488 00:17:34.473 }, 00:17:34.473 { 00:17:34.473 "name": "BaseBdev3", 00:17:34.473 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:34.473 "is_configured": true, 00:17:34.473 "data_offset": 2048, 00:17:34.473 "data_size": 63488 00:17:34.473 }, 00:17:34.473 { 00:17:34.473 "name": "BaseBdev4", 00:17:34.473 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:34.473 "is_configured": true, 00:17:34.473 "data_offset": 2048, 00:17:34.473 "data_size": 63488 00:17:34.473 } 00:17:34.473 ] 00:17:34.473 }' 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.473 16:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.473 16:18:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.473 [2024-09-28 16:18:49.068137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:34.473 [2024-09-28 16:18:49.068195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.473 [2024-09-28 16:18:49.068231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:34.473 [2024-09-28 16:18:49.068242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.473 [2024-09-28 16:18:49.068787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.473 [2024-09-28 16:18:49.068814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:34.473 [2024-09-28 16:18:49.068901] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:34.473 [2024-09-28 16:18:49.068922] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:34.473 [2024-09-28 16:18:49.068936] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:34.473 [2024-09-28 16:18:49.068949] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:34.473 BaseBdev1 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.473 16:18:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.414 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.674 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.674 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.674 "name": "raid_bdev1", 00:17:35.674 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:35.674 "strip_size_kb": 64, 00:17:35.674 "state": "online", 00:17:35.674 "raid_level": "raid5f", 00:17:35.674 "superblock": true, 00:17:35.674 "num_base_bdevs": 4, 00:17:35.674 "num_base_bdevs_discovered": 3, 00:17:35.674 "num_base_bdevs_operational": 3, 00:17:35.674 "base_bdevs_list": [ 00:17:35.674 { 00:17:35.674 "name": null, 00:17:35.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.674 "is_configured": false, 00:17:35.674 
"data_offset": 0, 00:17:35.674 "data_size": 63488 00:17:35.674 }, 00:17:35.674 { 00:17:35.674 "name": "BaseBdev2", 00:17:35.674 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:35.674 "is_configured": true, 00:17:35.674 "data_offset": 2048, 00:17:35.674 "data_size": 63488 00:17:35.674 }, 00:17:35.674 { 00:17:35.674 "name": "BaseBdev3", 00:17:35.674 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:35.674 "is_configured": true, 00:17:35.674 "data_offset": 2048, 00:17:35.674 "data_size": 63488 00:17:35.674 }, 00:17:35.674 { 00:17:35.674 "name": "BaseBdev4", 00:17:35.674 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:35.674 "is_configured": true, 00:17:35.674 "data_offset": 2048, 00:17:35.674 "data_size": 63488 00:17:35.674 } 00:17:35.674 ] 00:17:35.674 }' 00:17:35.674 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.674 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.934 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.934 "name": "raid_bdev1", 00:17:35.934 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:35.934 "strip_size_kb": 64, 00:17:35.934 "state": "online", 00:17:35.934 "raid_level": "raid5f", 00:17:35.934 "superblock": true, 00:17:35.934 "num_base_bdevs": 4, 00:17:35.934 "num_base_bdevs_discovered": 3, 00:17:35.934 "num_base_bdevs_operational": 3, 00:17:35.934 "base_bdevs_list": [ 00:17:35.934 { 00:17:35.934 "name": null, 00:17:35.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.935 "is_configured": false, 00:17:35.935 "data_offset": 0, 00:17:35.935 "data_size": 63488 00:17:35.935 }, 00:17:35.935 { 00:17:35.935 "name": "BaseBdev2", 00:17:35.935 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:35.935 "is_configured": true, 00:17:35.935 "data_offset": 2048, 00:17:35.935 "data_size": 63488 00:17:35.935 }, 00:17:35.935 { 00:17:35.935 "name": "BaseBdev3", 00:17:35.935 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:35.935 "is_configured": true, 00:17:35.935 "data_offset": 2048, 00:17:35.935 "data_size": 63488 00:17:35.935 }, 00:17:35.935 { 00:17:35.935 "name": "BaseBdev4", 00:17:35.935 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:35.935 "is_configured": true, 00:17:35.935 "data_offset": 2048, 00:17:35.935 "data_size": 63488 00:17:35.935 } 00:17:35.935 ] 00:17:35.935 }' 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.935 
16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.935 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.195 [2024-09-28 16:18:50.621497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.195 [2024-09-28 16:18:50.621673] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:36.195 [2024-09-28 16:18:50.621714] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:36.195 request: 00:17:36.195 { 00:17:36.195 "base_bdev": "BaseBdev1", 00:17:36.195 "raid_bdev": "raid_bdev1", 00:17:36.195 "method": "bdev_raid_add_base_bdev", 00:17:36.195 "req_id": 1 00:17:36.195 } 00:17:36.195 Got JSON-RPC error response 00:17:36.195 response: 00:17:36.195 { 00:17:36.195 "code": -22, 00:17:36.195 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:36.195 } 00:17:36.195 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:36.195 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:36.195 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:36.195 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:36.195 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:36.195 16:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.136 "name": "raid_bdev1", 00:17:37.136 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:37.136 "strip_size_kb": 64, 00:17:37.136 "state": "online", 00:17:37.136 "raid_level": "raid5f", 00:17:37.136 "superblock": true, 00:17:37.136 "num_base_bdevs": 4, 00:17:37.136 "num_base_bdevs_discovered": 3, 00:17:37.136 "num_base_bdevs_operational": 3, 00:17:37.136 "base_bdevs_list": [ 00:17:37.136 { 00:17:37.136 "name": null, 00:17:37.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.136 "is_configured": false, 00:17:37.136 "data_offset": 0, 00:17:37.136 "data_size": 63488 00:17:37.136 }, 00:17:37.136 { 00:17:37.136 "name": "BaseBdev2", 00:17:37.136 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:37.136 "is_configured": true, 00:17:37.136 "data_offset": 2048, 00:17:37.136 "data_size": 63488 00:17:37.136 }, 00:17:37.136 { 00:17:37.136 "name": "BaseBdev3", 00:17:37.136 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:37.136 "is_configured": true, 00:17:37.136 "data_offset": 2048, 00:17:37.136 "data_size": 63488 00:17:37.136 }, 00:17:37.136 { 00:17:37.136 "name": "BaseBdev4", 00:17:37.136 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:37.136 "is_configured": true, 00:17:37.136 "data_offset": 2048, 00:17:37.136 "data_size": 63488 00:17:37.136 } 00:17:37.136 ] 00:17:37.136 }' 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.136 16:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.706 "name": "raid_bdev1", 00:17:37.706 "uuid": "d6cc7907-64fc-4948-84be-0666f048de4c", 00:17:37.706 "strip_size_kb": 64, 00:17:37.706 "state": "online", 00:17:37.706 "raid_level": "raid5f", 00:17:37.706 "superblock": true, 00:17:37.706 "num_base_bdevs": 4, 00:17:37.706 "num_base_bdevs_discovered": 3, 00:17:37.706 "num_base_bdevs_operational": 3, 00:17:37.706 "base_bdevs_list": [ 00:17:37.706 { 00:17:37.706 "name": null, 00:17:37.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.706 "is_configured": false, 00:17:37.706 "data_offset": 0, 00:17:37.706 "data_size": 63488 00:17:37.706 }, 00:17:37.706 { 00:17:37.706 "name": "BaseBdev2", 00:17:37.706 "uuid": "8d785d79-9108-5149-baa9-af72801ee089", 00:17:37.706 "is_configured": true, 
00:17:37.706 "data_offset": 2048, 00:17:37.706 "data_size": 63488 00:17:37.706 }, 00:17:37.706 { 00:17:37.706 "name": "BaseBdev3", 00:17:37.706 "uuid": "52f0de83-f0f1-5f9b-b806-0d8510a278f2", 00:17:37.706 "is_configured": true, 00:17:37.706 "data_offset": 2048, 00:17:37.706 "data_size": 63488 00:17:37.706 }, 00:17:37.706 { 00:17:37.706 "name": "BaseBdev4", 00:17:37.706 "uuid": "e3d7a65b-90a7-57f5-8fc0-2841a0bc30ff", 00:17:37.706 "is_configured": true, 00:17:37.706 "data_offset": 2048, 00:17:37.706 "data_size": 63488 00:17:37.706 } 00:17:37.706 ] 00:17:37.706 }' 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85122 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85122 ']' 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85122 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.706 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85122 00:17:37.707 killing process with pid 85122 00:17:37.707 Received shutdown signal, test time was about 60.000000 seconds 00:17:37.707 00:17:37.707 Latency(us) 00:17:37.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.707 
=================================================================================================================== 00:17:37.707 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:37.707 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.707 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.707 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85122' 00:17:37.707 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85122 00:17:37.707 [2024-09-28 16:18:52.285302] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.707 [2024-09-28 16:18:52.285433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.707 16:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85122 00:17:37.707 [2024-09-28 16:18:52.285515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.707 [2024-09-28 16:18:52.285528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:38.276 [2024-09-28 16:18:52.786870] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.657 16:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:39.657 00:17:39.657 real 0m27.313s 00:17:39.657 user 0m34.269s 00:17:39.657 sys 0m3.094s 00:17:39.657 16:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:39.657 16:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.657 ************************************ 00:17:39.657 END TEST raid5f_rebuild_test_sb 00:17:39.657 ************************************ 00:17:39.657 16:18:54 bdev_raid -- bdev/bdev_raid.sh@995 -- # 
base_blocklen=4096 00:17:39.657 16:18:54 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:39.657 16:18:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:39.657 16:18:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:39.657 16:18:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:39.657 ************************************ 00:17:39.657 START TEST raid_state_function_test_sb_4k 00:17:39.657 ************************************ 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:39.657 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85932 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:39.658 Process raid pid: 85932 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85932' 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85932 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 85932 ']' 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.658 16:18:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.658 [2024-09-28 16:18:54.281036] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:39.658 [2024-09-28 16:18:54.281147] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.918 [2024-09-28 16:18:54.445855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.178 [2024-09-28 16:18:54.694946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.438 [2024-09-28 16:18:54.927493] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.438 [2024-09-28 16:18:54.927532] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.438 [2024-09-28 16:18:55.110378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.438 [2024-09-28 16:18:55.110431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.438 [2024-09-28 16:18:55.110441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.438 [2024-09-28 16:18:55.110451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.438 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.698 "name": "Existed_Raid", 00:17:40.698 "uuid": "036580d1-2e42-463f-a9bb-6898cebc018e", 00:17:40.698 "strip_size_kb": 0, 00:17:40.698 "state": "configuring", 00:17:40.698 "raid_level": "raid1", 00:17:40.698 "superblock": true, 00:17:40.698 "num_base_bdevs": 2, 00:17:40.698 "num_base_bdevs_discovered": 0, 00:17:40.698 "num_base_bdevs_operational": 2, 00:17:40.698 "base_bdevs_list": [ 00:17:40.698 { 00:17:40.698 "name": "BaseBdev1", 00:17:40.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.698 "is_configured": false, 00:17:40.698 "data_offset": 0, 00:17:40.698 "data_size": 0 00:17:40.698 }, 00:17:40.698 { 00:17:40.698 "name": "BaseBdev2", 00:17:40.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.698 "is_configured": false, 00:17:40.698 "data_offset": 0, 00:17:40.698 "data_size": 0 00:17:40.698 } 00:17:40.698 ] 00:17:40.698 }' 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.698 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:40.959 16:18:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.959 [2024-09-28 16:18:55.561479] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.959 [2024-09-28 16:18:55.561520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.959 [2024-09-28 16:18:55.573481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.959 [2024-09-28 16:18:55.573517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.959 [2024-09-28 16:18:55.573526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.959 [2024-09-28 16:18:55.573538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.959 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:41.224 [2024-09-28 16:18:55.660841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.224 BaseBdev1 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.224 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.224 [ 00:17:41.224 { 00:17:41.224 "name": "BaseBdev1", 00:17:41.224 "aliases": [ 00:17:41.225 "d7eccba9-9629-4d93-a651-bd985ffd5417" 00:17:41.225 ], 00:17:41.225 "product_name": "Malloc 
disk", 00:17:41.225 "block_size": 4096, 00:17:41.225 "num_blocks": 8192, 00:17:41.225 "uuid": "d7eccba9-9629-4d93-a651-bd985ffd5417", 00:17:41.225 "assigned_rate_limits": { 00:17:41.225 "rw_ios_per_sec": 0, 00:17:41.225 "rw_mbytes_per_sec": 0, 00:17:41.225 "r_mbytes_per_sec": 0, 00:17:41.225 "w_mbytes_per_sec": 0 00:17:41.225 }, 00:17:41.225 "claimed": true, 00:17:41.225 "claim_type": "exclusive_write", 00:17:41.225 "zoned": false, 00:17:41.225 "supported_io_types": { 00:17:41.225 "read": true, 00:17:41.225 "write": true, 00:17:41.225 "unmap": true, 00:17:41.225 "flush": true, 00:17:41.225 "reset": true, 00:17:41.225 "nvme_admin": false, 00:17:41.225 "nvme_io": false, 00:17:41.225 "nvme_io_md": false, 00:17:41.225 "write_zeroes": true, 00:17:41.225 "zcopy": true, 00:17:41.225 "get_zone_info": false, 00:17:41.225 "zone_management": false, 00:17:41.225 "zone_append": false, 00:17:41.225 "compare": false, 00:17:41.225 "compare_and_write": false, 00:17:41.225 "abort": true, 00:17:41.225 "seek_hole": false, 00:17:41.225 "seek_data": false, 00:17:41.225 "copy": true, 00:17:41.225 "nvme_iov_md": false 00:17:41.225 }, 00:17:41.225 "memory_domains": [ 00:17:41.225 { 00:17:41.225 "dma_device_id": "system", 00:17:41.225 "dma_device_type": 1 00:17:41.225 }, 00:17:41.225 { 00:17:41.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.225 "dma_device_type": 2 00:17:41.225 } 00:17:41.225 ], 00:17:41.225 "driver_specific": {} 00:17:41.225 } 00:17:41.225 ] 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.225 16:18:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.225 "name": "Existed_Raid", 00:17:41.225 "uuid": "46c22d9c-9b07-41c2-b318-4657858764ec", 00:17:41.225 "strip_size_kb": 0, 00:17:41.225 "state": "configuring", 00:17:41.225 "raid_level": "raid1", 00:17:41.225 "superblock": true, 00:17:41.225 "num_base_bdevs": 2, 00:17:41.225 "num_base_bdevs_discovered": 1, 00:17:41.225 "num_base_bdevs_operational": 2, 
00:17:41.225 "base_bdevs_list": [ 00:17:41.225 { 00:17:41.225 "name": "BaseBdev1", 00:17:41.225 "uuid": "d7eccba9-9629-4d93-a651-bd985ffd5417", 00:17:41.225 "is_configured": true, 00:17:41.225 "data_offset": 256, 00:17:41.225 "data_size": 7936 00:17:41.225 }, 00:17:41.225 { 00:17:41.225 "name": "BaseBdev2", 00:17:41.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.225 "is_configured": false, 00:17:41.225 "data_offset": 0, 00:17:41.225 "data_size": 0 00:17:41.225 } 00:17:41.225 ] 00:17:41.225 }' 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.225 16:18:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.511 [2024-09-28 16:18:56.136021] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:41.511 [2024-09-28 16:18:56.136069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.511 [2024-09-28 16:18:56.148048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:41.511 [2024-09-28 16:18:56.150103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.511 [2024-09-28 16:18:56.150141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.511 16:18:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.511 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.815 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.815 "name": "Existed_Raid", 00:17:41.815 "uuid": "08f34615-b6f2-4d53-9b64-49fc73389a57", 00:17:41.815 "strip_size_kb": 0, 00:17:41.815 "state": "configuring", 00:17:41.815 "raid_level": "raid1", 00:17:41.815 "superblock": true, 00:17:41.815 "num_base_bdevs": 2, 00:17:41.815 "num_base_bdevs_discovered": 1, 00:17:41.815 "num_base_bdevs_operational": 2, 00:17:41.815 "base_bdevs_list": [ 00:17:41.815 { 00:17:41.815 "name": "BaseBdev1", 00:17:41.815 "uuid": "d7eccba9-9629-4d93-a651-bd985ffd5417", 00:17:41.815 "is_configured": true, 00:17:41.815 "data_offset": 256, 00:17:41.815 "data_size": 7936 00:17:41.815 }, 00:17:41.815 { 00:17:41.815 "name": "BaseBdev2", 00:17:41.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.815 "is_configured": false, 00:17:41.815 "data_offset": 0, 00:17:41.815 "data_size": 0 00:17:41.815 } 00:17:41.815 ] 00:17:41.815 }' 00:17:41.815 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.815 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.094 [2024-09-28 16:18:56.680370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.094 [2024-09-28 16:18:56.680675] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:42.094 [2024-09-28 16:18:56.680700] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.094 [2024-09-28 16:18:56.681016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:42.094 [2024-09-28 16:18:56.681194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:42.094 [2024-09-28 16:18:56.681214] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:42.094 BaseBdev2 00:17:42.094 [2024-09-28 16:18:56.681398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.094 [ 00:17:42.094 { 00:17:42.094 "name": "BaseBdev2", 00:17:42.094 "aliases": [ 00:17:42.094 "4c9a4c3a-4d55-46c9-a652-3f32cf0b41f3" 00:17:42.094 ], 00:17:42.094 "product_name": "Malloc disk", 00:17:42.094 "block_size": 4096, 00:17:42.094 "num_blocks": 8192, 00:17:42.094 "uuid": "4c9a4c3a-4d55-46c9-a652-3f32cf0b41f3", 00:17:42.094 "assigned_rate_limits": { 00:17:42.094 "rw_ios_per_sec": 0, 00:17:42.094 "rw_mbytes_per_sec": 0, 00:17:42.094 "r_mbytes_per_sec": 0, 00:17:42.094 "w_mbytes_per_sec": 0 00:17:42.094 }, 00:17:42.094 "claimed": true, 00:17:42.094 "claim_type": "exclusive_write", 00:17:42.094 "zoned": false, 00:17:42.094 "supported_io_types": { 00:17:42.094 "read": true, 00:17:42.094 "write": true, 00:17:42.094 "unmap": true, 00:17:42.094 "flush": true, 00:17:42.094 "reset": true, 00:17:42.094 "nvme_admin": false, 00:17:42.094 "nvme_io": false, 00:17:42.094 "nvme_io_md": false, 00:17:42.094 "write_zeroes": true, 00:17:42.094 "zcopy": true, 00:17:42.094 "get_zone_info": false, 00:17:42.094 "zone_management": false, 00:17:42.094 "zone_append": false, 00:17:42.094 "compare": false, 00:17:42.094 "compare_and_write": false, 00:17:42.094 "abort": true, 00:17:42.094 "seek_hole": false, 00:17:42.094 "seek_data": false, 00:17:42.094 "copy": true, 00:17:42.094 "nvme_iov_md": false 00:17:42.094 }, 00:17:42.094 "memory_domains": [ 
00:17:42.094 { 00:17:42.094 "dma_device_id": "system", 00:17:42.094 "dma_device_type": 1 00:17:42.094 }, 00:17:42.094 { 00:17:42.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.094 "dma_device_type": 2 00:17:42.094 } 00:17:42.094 ], 00:17:42.094 "driver_specific": {} 00:17:42.094 } 00:17:42.094 ] 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.094 16:18:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.094 "name": "Existed_Raid", 00:17:42.094 "uuid": "08f34615-b6f2-4d53-9b64-49fc73389a57", 00:17:42.094 "strip_size_kb": 0, 00:17:42.094 "state": "online", 00:17:42.094 "raid_level": "raid1", 00:17:42.094 "superblock": true, 00:17:42.094 "num_base_bdevs": 2, 00:17:42.094 "num_base_bdevs_discovered": 2, 00:17:42.094 "num_base_bdevs_operational": 2, 00:17:42.094 "base_bdevs_list": [ 00:17:42.094 { 00:17:42.094 "name": "BaseBdev1", 00:17:42.094 "uuid": "d7eccba9-9629-4d93-a651-bd985ffd5417", 00:17:42.094 "is_configured": true, 00:17:42.094 "data_offset": 256, 00:17:42.094 "data_size": 7936 00:17:42.094 }, 00:17:42.094 { 00:17:42.094 "name": "BaseBdev2", 00:17:42.094 "uuid": "4c9a4c3a-4d55-46c9-a652-3f32cf0b41f3", 00:17:42.094 "is_configured": true, 00:17:42.094 "data_offset": 256, 00:17:42.094 "data_size": 7936 00:17:42.094 } 00:17:42.094 ] 00:17:42.094 }' 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.094 16:18:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.664 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:42.664 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 
-- # local raid_bdev_name=Existed_Raid 00:17:42.664 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:42.665 [2024-09-28 16:18:57.167787] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:42.665 "name": "Existed_Raid", 00:17:42.665 "aliases": [ 00:17:42.665 "08f34615-b6f2-4d53-9b64-49fc73389a57" 00:17:42.665 ], 00:17:42.665 "product_name": "Raid Volume", 00:17:42.665 "block_size": 4096, 00:17:42.665 "num_blocks": 7936, 00:17:42.665 "uuid": "08f34615-b6f2-4d53-9b64-49fc73389a57", 00:17:42.665 "assigned_rate_limits": { 00:17:42.665 "rw_ios_per_sec": 0, 00:17:42.665 "rw_mbytes_per_sec": 0, 00:17:42.665 "r_mbytes_per_sec": 0, 00:17:42.665 "w_mbytes_per_sec": 0 00:17:42.665 }, 00:17:42.665 "claimed": false, 00:17:42.665 "zoned": false, 00:17:42.665 "supported_io_types": { 00:17:42.665 "read": true, 00:17:42.665 "write": true, 00:17:42.665 "unmap": false, 00:17:42.665 
"flush": false, 00:17:42.665 "reset": true, 00:17:42.665 "nvme_admin": false, 00:17:42.665 "nvme_io": false, 00:17:42.665 "nvme_io_md": false, 00:17:42.665 "write_zeroes": true, 00:17:42.665 "zcopy": false, 00:17:42.665 "get_zone_info": false, 00:17:42.665 "zone_management": false, 00:17:42.665 "zone_append": false, 00:17:42.665 "compare": false, 00:17:42.665 "compare_and_write": false, 00:17:42.665 "abort": false, 00:17:42.665 "seek_hole": false, 00:17:42.665 "seek_data": false, 00:17:42.665 "copy": false, 00:17:42.665 "nvme_iov_md": false 00:17:42.665 }, 00:17:42.665 "memory_domains": [ 00:17:42.665 { 00:17:42.665 "dma_device_id": "system", 00:17:42.665 "dma_device_type": 1 00:17:42.665 }, 00:17:42.665 { 00:17:42.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.665 "dma_device_type": 2 00:17:42.665 }, 00:17:42.665 { 00:17:42.665 "dma_device_id": "system", 00:17:42.665 "dma_device_type": 1 00:17:42.665 }, 00:17:42.665 { 00:17:42.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.665 "dma_device_type": 2 00:17:42.665 } 00:17:42.665 ], 00:17:42.665 "driver_specific": { 00:17:42.665 "raid": { 00:17:42.665 "uuid": "08f34615-b6f2-4d53-9b64-49fc73389a57", 00:17:42.665 "strip_size_kb": 0, 00:17:42.665 "state": "online", 00:17:42.665 "raid_level": "raid1", 00:17:42.665 "superblock": true, 00:17:42.665 "num_base_bdevs": 2, 00:17:42.665 "num_base_bdevs_discovered": 2, 00:17:42.665 "num_base_bdevs_operational": 2, 00:17:42.665 "base_bdevs_list": [ 00:17:42.665 { 00:17:42.665 "name": "BaseBdev1", 00:17:42.665 "uuid": "d7eccba9-9629-4d93-a651-bd985ffd5417", 00:17:42.665 "is_configured": true, 00:17:42.665 "data_offset": 256, 00:17:42.665 "data_size": 7936 00:17:42.665 }, 00:17:42.665 { 00:17:42.665 "name": "BaseBdev2", 00:17:42.665 "uuid": "4c9a4c3a-4d55-46c9-a652-3f32cf0b41f3", 00:17:42.665 "is_configured": true, 00:17:42.665 "data_offset": 256, 00:17:42.665 "data_size": 7936 00:17:42.665 } 00:17:42.665 ] 00:17:42.665 } 00:17:42.665 } 00:17:42.665 }' 00:17:42.665 
16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:42.665 BaseBdev2' 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.665 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev2 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.925 [2024-09-28 16:18:57.395208] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.925 
16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.925 "name": "Existed_Raid", 00:17:42.925 "uuid": "08f34615-b6f2-4d53-9b64-49fc73389a57", 00:17:42.925 "strip_size_kb": 0, 00:17:42.925 "state": "online", 00:17:42.925 "raid_level": "raid1", 00:17:42.925 "superblock": true, 00:17:42.925 "num_base_bdevs": 2, 00:17:42.925 "num_base_bdevs_discovered": 1, 00:17:42.925 "num_base_bdevs_operational": 1, 
00:17:42.925 "base_bdevs_list": [ 00:17:42.925 { 00:17:42.925 "name": null, 00:17:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.925 "is_configured": false, 00:17:42.925 "data_offset": 0, 00:17:42.925 "data_size": 7936 00:17:42.925 }, 00:17:42.925 { 00:17:42.925 "name": "BaseBdev2", 00:17:42.925 "uuid": "4c9a4c3a-4d55-46c9-a652-3f32cf0b41f3", 00:17:42.925 "is_configured": true, 00:17:42.925 "data_offset": 256, 00:17:42.925 "data_size": 7936 00:17:42.925 } 00:17:42.925 ] 00:17:42.925 }' 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.925 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.494 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:43.494 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:43.494 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.494 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.494 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.494 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:43.494 16:18:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.494 [2024-09-28 16:18:58.025827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.494 [2024-09-28 16:18:58.025955] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.494 [2024-09-28 16:18:58.127040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.494 [2024-09-28 16:18:58.127098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.494 [2024-09-28 16:18:58.127112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.494 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85932 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 85932 ']' 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 85932 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85932 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.753 killing process with pid 85932 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85932' 00:17:43.753 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 85932 00:17:43.753 [2024-09-28 16:18:58.206321] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.754 16:18:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 85932 00:17:43.754 [2024-09-28 16:18:58.223412] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.137 16:18:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:45.137 00:17:45.137 real 0m5.349s 00:17:45.137 user 0m7.465s 00:17:45.137 sys 0m1.029s 00:17:45.137 16:18:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.137 16:18:59 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.137 ************************************ 00:17:45.137 END TEST raid_state_function_test_sb_4k 00:17:45.137 ************************************ 00:17:45.137 16:18:59 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:45.137 16:18:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:45.137 16:18:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.137 16:18:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.137 ************************************ 00:17:45.137 START TEST raid_superblock_test_4k 00:17:45.137 ************************************ 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86190 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86190 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86190 ']' 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.137 16:18:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.137 [2024-09-28 16:18:59.697078] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:45.137 [2024-09-28 16:18:59.697182] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86190 ] 00:17:45.399 [2024-09-28 16:18:59.865663] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.658 [2024-09-28 16:19:00.103195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.658 [2024-09-28 16:19:00.334353] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.658 [2024-09-28 16:19:00.334385] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.917 malloc1 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.917 [2024-09-28 16:19:00.568623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:45.917 [2024-09-28 16:19:00.568703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.917 [2024-09-28 16:19:00.568729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:45.917 [2024-09-28 16:19:00.568742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.917 [2024-09-28 16:19:00.571111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.917 [2024-09-28 16:19:00.571143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:45.917 pt1 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.917 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.176 malloc2 00:17:46.176 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.176 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.176 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.176 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.176 [2024-09-28 16:19:00.658924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.176 [2024-09-28 16:19:00.658985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.176 [2024-09-28 16:19:00.659027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:46.176 [2024-09-28 16:19:00.659036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.176 [2024-09-28 16:19:00.661450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.176 [2024-09-28 
16:19:00.661493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.176 pt2 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.177 [2024-09-28 16:19:00.670970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:46.177 [2024-09-28 16:19:00.673075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.177 [2024-09-28 16:19:00.673276] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:46.177 [2024-09-28 16:19:00.673289] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.177 [2024-09-28 16:19:00.673529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:46.177 [2024-09-28 16:19:00.673705] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:46.177 [2024-09-28 16:19:00.673742] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:46.177 [2024-09-28 16:19:00.673876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.177 "name": "raid_bdev1", 00:17:46.177 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:46.177 "strip_size_kb": 0, 00:17:46.177 "state": "online", 00:17:46.177 "raid_level": "raid1", 00:17:46.177 "superblock": true, 00:17:46.177 "num_base_bdevs": 2, 00:17:46.177 
"num_base_bdevs_discovered": 2, 00:17:46.177 "num_base_bdevs_operational": 2, 00:17:46.177 "base_bdevs_list": [ 00:17:46.177 { 00:17:46.177 "name": "pt1", 00:17:46.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.177 "is_configured": true, 00:17:46.177 "data_offset": 256, 00:17:46.177 "data_size": 7936 00:17:46.177 }, 00:17:46.177 { 00:17:46.177 "name": "pt2", 00:17:46.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.177 "is_configured": true, 00:17:46.177 "data_offset": 256, 00:17:46.177 "data_size": 7936 00:17:46.177 } 00:17:46.177 ] 00:17:46.177 }' 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.177 16:19:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.746 [2024-09-28 16:19:01.158333] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.746 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.746 "name": "raid_bdev1", 00:17:46.746 "aliases": [ 00:17:46.746 "44c7bd85-acbe-4b70-b38b-1559ba06dbcb" 00:17:46.746 ], 00:17:46.746 "product_name": "Raid Volume", 00:17:46.746 "block_size": 4096, 00:17:46.746 "num_blocks": 7936, 00:17:46.746 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:46.746 "assigned_rate_limits": { 00:17:46.746 "rw_ios_per_sec": 0, 00:17:46.746 "rw_mbytes_per_sec": 0, 00:17:46.746 "r_mbytes_per_sec": 0, 00:17:46.746 "w_mbytes_per_sec": 0 00:17:46.746 }, 00:17:46.746 "claimed": false, 00:17:46.747 "zoned": false, 00:17:46.747 "supported_io_types": { 00:17:46.747 "read": true, 00:17:46.747 "write": true, 00:17:46.747 "unmap": false, 00:17:46.747 "flush": false, 00:17:46.747 "reset": true, 00:17:46.747 "nvme_admin": false, 00:17:46.747 "nvme_io": false, 00:17:46.747 "nvme_io_md": false, 00:17:46.747 "write_zeroes": true, 00:17:46.747 "zcopy": false, 00:17:46.747 "get_zone_info": false, 00:17:46.747 "zone_management": false, 00:17:46.747 "zone_append": false, 00:17:46.747 "compare": false, 00:17:46.747 "compare_and_write": false, 00:17:46.747 "abort": false, 00:17:46.747 "seek_hole": false, 00:17:46.747 "seek_data": false, 00:17:46.747 "copy": false, 00:17:46.747 "nvme_iov_md": false 00:17:46.747 }, 00:17:46.747 "memory_domains": [ 00:17:46.747 { 00:17:46.747 "dma_device_id": "system", 00:17:46.747 "dma_device_type": 1 00:17:46.747 }, 00:17:46.747 { 00:17:46.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.747 "dma_device_type": 2 00:17:46.747 }, 00:17:46.747 { 00:17:46.747 "dma_device_id": "system", 00:17:46.747 "dma_device_type": 1 00:17:46.747 }, 00:17:46.747 { 00:17:46.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.747 "dma_device_type": 2 00:17:46.747 } 00:17:46.747 ], 
00:17:46.747 "driver_specific": { 00:17:46.747 "raid": { 00:17:46.747 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:46.747 "strip_size_kb": 0, 00:17:46.747 "state": "online", 00:17:46.747 "raid_level": "raid1", 00:17:46.747 "superblock": true, 00:17:46.747 "num_base_bdevs": 2, 00:17:46.747 "num_base_bdevs_discovered": 2, 00:17:46.747 "num_base_bdevs_operational": 2, 00:17:46.747 "base_bdevs_list": [ 00:17:46.747 { 00:17:46.747 "name": "pt1", 00:17:46.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.747 "is_configured": true, 00:17:46.747 "data_offset": 256, 00:17:46.747 "data_size": 7936 00:17:46.747 }, 00:17:46.747 { 00:17:46.747 "name": "pt2", 00:17:46.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.747 "is_configured": true, 00:17:46.747 "data_offset": 256, 00:17:46.747 "data_size": 7936 00:17:46.747 } 00:17:46.747 ] 00:17:46.747 } 00:17:46.747 } 00:17:46.747 }' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:46.747 pt2' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.747 16:19:01 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.747 [2024-09-28 16:19:01.389862] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=44c7bd85-acbe-4b70-b38b-1559ba06dbcb 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 44c7bd85-acbe-4b70-b38b-1559ba06dbcb ']' 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.747 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 [2024-09-28 16:19:01.433573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.006 [2024-09-28 16:19:01.433596] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.006 [2024-09-28 16:19:01.433662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.006 [2024-09-28 16:19:01.433717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.006 [2024-09-28 16:19:01.433729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 [2024-09-28 16:19:01.581350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:47.006 [2024-09-28 16:19:01.583450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:47.006 [2024-09-28 16:19:01.583521] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:47.006 [2024-09-28 16:19:01.583572] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:47.006 [2024-09-28 16:19:01.583586] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.006 [2024-09-28 16:19:01.583597] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:47.006 request: 00:17:47.006 { 00:17:47.006 "name": "raid_bdev1", 00:17:47.006 "raid_level": "raid1", 00:17:47.006 "base_bdevs": [ 00:17:47.006 "malloc1", 00:17:47.006 "malloc2" 00:17:47.006 ], 00:17:47.006 "superblock": false, 00:17:47.006 "method": "bdev_raid_create", 00:17:47.006 "req_id": 1 00:17:47.006 } 00:17:47.006 Got JSON-RPC error response 00:17:47.006 response: 00:17:47.006 { 00:17:47.006 "code": -17, 00:17:47.006 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:47.006 } 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 [2024-09-28 16:19:01.645249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.006 [2024-09-28 16:19:01.645340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.006 [2024-09-28 16:19:01.645389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:47.006 [2024-09-28 16:19:01.645419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.006 [2024-09-28 16:19:01.647825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.006 [2024-09-28 16:19:01.647900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:47.006 [2024-09-28 16:19:01.648014] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:47.006 [2024-09-28 16:19:01.648097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.006 pt1 00:17:47.006 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.007 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.265 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.265 "name": "raid_bdev1", 00:17:47.265 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:47.265 "strip_size_kb": 0, 00:17:47.265 "state": "configuring", 00:17:47.265 "raid_level": "raid1", 00:17:47.265 "superblock": true, 00:17:47.265 "num_base_bdevs": 2, 00:17:47.265 "num_base_bdevs_discovered": 1, 00:17:47.265 "num_base_bdevs_operational": 2, 00:17:47.265 "base_bdevs_list": [ 00:17:47.265 { 00:17:47.265 "name": "pt1", 00:17:47.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.265 "is_configured": true, 00:17:47.265 "data_offset": 256, 00:17:47.265 "data_size": 7936 00:17:47.265 }, 00:17:47.265 { 00:17:47.265 "name": null, 00:17:47.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.265 "is_configured": false, 00:17:47.265 "data_offset": 256, 00:17:47.265 "data_size": 7936 00:17:47.265 } 
00:17:47.265 ] 00:17:47.265 }' 00:17:47.265 16:19:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.265 16:19:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.524 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:47.524 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:47.524 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:47.524 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.524 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.524 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.524 [2024-09-28 16:19:02.116404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.524 [2024-09-28 16:19:02.116506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.524 [2024-09-28 16:19:02.116530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:47.525 [2024-09-28 16:19:02.116541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.525 [2024-09-28 16:19:02.116984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.525 [2024-09-28 16:19:02.117005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.525 [2024-09-28 16:19:02.117066] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:47.525 [2024-09-28 16:19:02.117087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.525 [2024-09-28 16:19:02.117199] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:47.525 [2024-09-28 16:19:02.117210] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:47.525 [2024-09-28 16:19:02.117470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:47.525 [2024-09-28 16:19:02.117627] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:47.525 [2024-09-28 16:19:02.117644] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:47.525 [2024-09-28 16:19:02.117772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.525 pt2 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.525 "name": "raid_bdev1", 00:17:47.525 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:47.525 "strip_size_kb": 0, 00:17:47.525 "state": "online", 00:17:47.525 "raid_level": "raid1", 00:17:47.525 "superblock": true, 00:17:47.525 "num_base_bdevs": 2, 00:17:47.525 "num_base_bdevs_discovered": 2, 00:17:47.525 "num_base_bdevs_operational": 2, 00:17:47.525 "base_bdevs_list": [ 00:17:47.525 { 00:17:47.525 "name": "pt1", 00:17:47.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.525 "is_configured": true, 00:17:47.525 "data_offset": 256, 00:17:47.525 "data_size": 7936 00:17:47.525 }, 00:17:47.525 { 00:17:47.525 "name": "pt2", 00:17:47.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.525 "is_configured": true, 00:17:47.525 "data_offset": 256, 00:17:47.525 "data_size": 7936 00:17:47.525 } 00:17:47.525 ] 00:17:47.525 }' 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.525 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.093 [2024-09-28 16:19:02.591898] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.093 "name": "raid_bdev1", 00:17:48.093 "aliases": [ 00:17:48.093 "44c7bd85-acbe-4b70-b38b-1559ba06dbcb" 00:17:48.093 ], 00:17:48.093 "product_name": "Raid Volume", 00:17:48.093 "block_size": 4096, 00:17:48.093 "num_blocks": 7936, 00:17:48.093 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:48.093 "assigned_rate_limits": { 00:17:48.093 "rw_ios_per_sec": 0, 00:17:48.093 "rw_mbytes_per_sec": 0, 00:17:48.093 "r_mbytes_per_sec": 0, 00:17:48.093 "w_mbytes_per_sec": 0 00:17:48.093 }, 00:17:48.093 "claimed": false, 00:17:48.093 "zoned": false, 00:17:48.093 "supported_io_types": { 00:17:48.093 "read": true, 00:17:48.093 "write": true, 00:17:48.093 "unmap": false, 
00:17:48.093 "flush": false, 00:17:48.093 "reset": true, 00:17:48.093 "nvme_admin": false, 00:17:48.093 "nvme_io": false, 00:17:48.093 "nvme_io_md": false, 00:17:48.093 "write_zeroes": true, 00:17:48.093 "zcopy": false, 00:17:48.093 "get_zone_info": false, 00:17:48.093 "zone_management": false, 00:17:48.093 "zone_append": false, 00:17:48.093 "compare": false, 00:17:48.093 "compare_and_write": false, 00:17:48.093 "abort": false, 00:17:48.093 "seek_hole": false, 00:17:48.093 "seek_data": false, 00:17:48.093 "copy": false, 00:17:48.093 "nvme_iov_md": false 00:17:48.093 }, 00:17:48.093 "memory_domains": [ 00:17:48.093 { 00:17:48.093 "dma_device_id": "system", 00:17:48.093 "dma_device_type": 1 00:17:48.093 }, 00:17:48.093 { 00:17:48.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.093 "dma_device_type": 2 00:17:48.093 }, 00:17:48.093 { 00:17:48.093 "dma_device_id": "system", 00:17:48.093 "dma_device_type": 1 00:17:48.093 }, 00:17:48.093 { 00:17:48.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.093 "dma_device_type": 2 00:17:48.093 } 00:17:48.093 ], 00:17:48.093 "driver_specific": { 00:17:48.093 "raid": { 00:17:48.093 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:48.093 "strip_size_kb": 0, 00:17:48.093 "state": "online", 00:17:48.093 "raid_level": "raid1", 00:17:48.093 "superblock": true, 00:17:48.093 "num_base_bdevs": 2, 00:17:48.093 "num_base_bdevs_discovered": 2, 00:17:48.093 "num_base_bdevs_operational": 2, 00:17:48.093 "base_bdevs_list": [ 00:17:48.093 { 00:17:48.093 "name": "pt1", 00:17:48.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.093 "is_configured": true, 00:17:48.093 "data_offset": 256, 00:17:48.093 "data_size": 7936 00:17:48.093 }, 00:17:48.093 { 00:17:48.093 "name": "pt2", 00:17:48.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.093 "is_configured": true, 00:17:48.093 "data_offset": 256, 00:17:48.093 "data_size": 7936 00:17:48.093 } 00:17:48.093 ] 00:17:48.093 } 00:17:48.093 } 00:17:48.093 }' 00:17:48.093 
16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:48.093 pt2' 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.093 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.352 [2024-09-28 16:19:02.843487] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 44c7bd85-acbe-4b70-b38b-1559ba06dbcb '!=' 44c7bd85-acbe-4b70-b38b-1559ba06dbcb ']' 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.352 [2024-09-28 16:19:02.875276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.352 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.353 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.353 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.353 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.353 "name": "raid_bdev1", 00:17:48.353 "uuid": 
"44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:48.353 "strip_size_kb": 0, 00:17:48.353 "state": "online", 00:17:48.353 "raid_level": "raid1", 00:17:48.353 "superblock": true, 00:17:48.353 "num_base_bdevs": 2, 00:17:48.353 "num_base_bdevs_discovered": 1, 00:17:48.353 "num_base_bdevs_operational": 1, 00:17:48.353 "base_bdevs_list": [ 00:17:48.353 { 00:17:48.353 "name": null, 00:17:48.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.353 "is_configured": false, 00:17:48.353 "data_offset": 0, 00:17:48.353 "data_size": 7936 00:17:48.353 }, 00:17:48.353 { 00:17:48.353 "name": "pt2", 00:17:48.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.353 "is_configured": true, 00:17:48.353 "data_offset": 256, 00:17:48.353 "data_size": 7936 00:17:48.353 } 00:17:48.353 ] 00:17:48.353 }' 00:17:48.353 16:19:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.353 16:19:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.921 [2024-09-28 16:19:03.338401] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.921 [2024-09-28 16:19:03.338470] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.921 [2024-09-28 16:19:03.338563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.921 [2024-09-28 16:19:03.338619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.921 [2024-09-28 16:19:03.338663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.921 [2024-09-28 16:19:03.406302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.921 [2024-09-28 16:19:03.406391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.921 [2024-09-28 16:19:03.406439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:48.921 [2024-09-28 16:19:03.406478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.921 [2024-09-28 16:19:03.408952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.921 [2024-09-28 16:19:03.409025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.921 [2024-09-28 16:19:03.409118] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:48.921 [2024-09-28 16:19:03.409207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.921 [2024-09-28 16:19:03.409340] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:48.921 [2024-09-28 16:19:03.409383] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:48.921 [2024-09-28 16:19:03.409629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:48.921 [2024-09-28 16:19:03.409832] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:48.921 [2024-09-28 16:19:03.409872] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:48.921 [2024-09-28 16:19:03.410044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.921 pt2 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.921 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.921 16:19:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.921 "name": "raid_bdev1", 00:17:48.921 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:48.921 "strip_size_kb": 0, 00:17:48.921 "state": "online", 00:17:48.921 "raid_level": "raid1", 00:17:48.921 "superblock": true, 00:17:48.921 "num_base_bdevs": 2, 00:17:48.921 "num_base_bdevs_discovered": 1, 00:17:48.921 "num_base_bdevs_operational": 1, 00:17:48.921 "base_bdevs_list": [ 00:17:48.921 { 00:17:48.921 "name": null, 00:17:48.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.921 "is_configured": false, 00:17:48.921 "data_offset": 256, 00:17:48.922 "data_size": 7936 00:17:48.922 }, 00:17:48.922 { 00:17:48.922 "name": "pt2", 00:17:48.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.922 "is_configured": true, 00:17:48.922 "data_offset": 256, 00:17:48.922 "data_size": 7936 00:17:48.922 } 00:17:48.922 ] 00:17:48.922 }' 00:17:48.922 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.922 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.181 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:49.181 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.181 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.181 [2024-09-28 16:19:03.861458] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.181 [2024-09-28 16:19:03.861530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.181 [2024-09-28 16:19:03.861618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.181 [2024-09-28 16:19:03.861676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:49.181 [2024-09-28 16:19:03.861738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.441 [2024-09-28 16:19:03.921375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:49.441 [2024-09-28 16:19:03.921481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.441 [2024-09-28 16:19:03.921514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:49.441 [2024-09-28 16:19:03.921542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.441 [2024-09-28 16:19:03.923978] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.441 [2024-09-28 16:19:03.924053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:49.441 [2024-09-28 16:19:03.924144] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:49.441 [2024-09-28 16:19:03.924206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:49.441 [2024-09-28 16:19:03.924384] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:49.441 [2024-09-28 16:19:03.924436] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.441 [2024-09-28 16:19:03.924474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:49.441 [2024-09-28 16:19:03.924540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.441 [2024-09-28 16:19:03.924621] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:49.441 [2024-09-28 16:19:03.924629] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:49.441 [2024-09-28 16:19:03.924857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:49.441 [2024-09-28 16:19:03.925009] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:49.441 [2024-09-28 16:19:03.925022] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:49.441 [2024-09-28 16:19:03.925155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.441 pt1 00:17:49.441 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.442 "name": "raid_bdev1", 00:17:49.442 "uuid": "44c7bd85-acbe-4b70-b38b-1559ba06dbcb", 00:17:49.442 "strip_size_kb": 0, 00:17:49.442 "state": "online", 00:17:49.442 
"raid_level": "raid1", 00:17:49.442 "superblock": true, 00:17:49.442 "num_base_bdevs": 2, 00:17:49.442 "num_base_bdevs_discovered": 1, 00:17:49.442 "num_base_bdevs_operational": 1, 00:17:49.442 "base_bdevs_list": [ 00:17:49.442 { 00:17:49.442 "name": null, 00:17:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.442 "is_configured": false, 00:17:49.442 "data_offset": 256, 00:17:49.442 "data_size": 7936 00:17:49.442 }, 00:17:49.442 { 00:17:49.442 "name": "pt2", 00:17:49.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.442 "is_configured": true, 00:17:49.442 "data_offset": 256, 00:17:49.442 "data_size": 7936 00:17:49.442 } 00:17:49.442 ] 00:17:49.442 }' 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.442 16:19:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.701 16:19:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:49.701 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.701 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.701 16:19:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:17:49.961 [2024-09-28 16:19:04.440699] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 44c7bd85-acbe-4b70-b38b-1559ba06dbcb '!=' 44c7bd85-acbe-4b70-b38b-1559ba06dbcb ']' 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86190 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86190 ']' 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86190 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86190 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.961 killing process with pid 86190 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86190' 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86190 00:17:49.961 [2024-09-28 16:19:04.527102] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.961 [2024-09-28 16:19:04.527170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.961 [2024-09-28 16:19:04.527208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.961 [2024-09-28 
16:19:04.527236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:49.961 16:19:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86190 00:17:50.220 [2024-09-28 16:19:04.744528] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.601 16:19:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:51.601 ************************************ 00:17:51.601 END TEST raid_superblock_test_4k 00:17:51.601 ************************************ 00:17:51.601 00:17:51.601 real 0m6.450s 00:17:51.601 user 0m9.476s 00:17:51.601 sys 0m1.324s 00:17:51.601 16:19:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.601 16:19:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.601 16:19:06 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:51.601 16:19:06 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:51.601 16:19:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:51.601 16:19:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.601 16:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.601 ************************************ 00:17:51.601 START TEST raid_rebuild_test_sb_4k 00:17:51.601 ************************************ 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:51.601 16:19:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86518 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86518 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86518 ']' 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.601 16:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.602 [2024-09-28 16:19:06.248106] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:51.602 [2024-09-28 16:19:06.248332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86518 ] 00:17:51.602 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:51.602 Zero copy mechanism will not be used. 00:17:51.862 [2024-09-28 16:19:06.416399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.121 [2024-09-28 16:19:06.657208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.381 [2024-09-28 16:19:06.891938] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.381 [2024-09-28 16:19:06.892068] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.381 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.381 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:52.381 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:52.381 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:52.381 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.381 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.641 BaseBdev1_malloc 00:17:52.641 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.641 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 [2024-09-28 16:19:07.099493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:52.642 [2024-09-28 16:19:07.099641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.642 [2024-09-28 16:19:07.099684] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:17:52.642 [2024-09-28 16:19:07.099723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.642 [2024-09-28 16:19:07.102150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.642 [2024-09-28 16:19:07.102241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:52.642 BaseBdev1 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 BaseBdev2_malloc 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 [2024-09-28 16:19:07.174627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:52.642 [2024-09-28 16:19:07.174707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.642 [2024-09-28 16:19:07.174727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:52.642 [2024-09-28 16:19:07.174740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:52.642 [2024-09-28 16:19:07.177153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.642 [2024-09-28 16:19:07.177233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:52.642 BaseBdev2 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 spare_malloc 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 spare_delay 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 [2024-09-28 16:19:07.244884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:52.642 [2024-09-28 16:19:07.244961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.642 [2024-09-28 16:19:07.244983] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:52.642 [2024-09-28 16:19:07.244994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.642 [2024-09-28 16:19:07.247407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.642 [2024-09-28 16:19:07.247499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:52.642 spare 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 [2024-09-28 16:19:07.256915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.642 [2024-09-28 16:19:07.258984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.642 [2024-09-28 16:19:07.259249] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:52.642 [2024-09-28 16:19:07.259269] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:52.642 [2024-09-28 16:19:07.259545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:52.642 [2024-09-28 16:19:07.259727] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:52.642 [2024-09-28 16:19:07.259736] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:52.642 [2024-09-28 16:19:07.259872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.642 
16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.642 "name": "raid_bdev1", 00:17:52.642 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 
00:17:52.642 "strip_size_kb": 0, 00:17:52.642 "state": "online", 00:17:52.642 "raid_level": "raid1", 00:17:52.642 "superblock": true, 00:17:52.642 "num_base_bdevs": 2, 00:17:52.642 "num_base_bdevs_discovered": 2, 00:17:52.642 "num_base_bdevs_operational": 2, 00:17:52.642 "base_bdevs_list": [ 00:17:52.642 { 00:17:52.642 "name": "BaseBdev1", 00:17:52.642 "uuid": "29ba818c-83bd-5ef3-84f3-2960874af17e", 00:17:52.642 "is_configured": true, 00:17:52.642 "data_offset": 256, 00:17:52.642 "data_size": 7936 00:17:52.642 }, 00:17:52.642 { 00:17:52.642 "name": "BaseBdev2", 00:17:52.642 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:52.642 "is_configured": true, 00:17:52.642 "data_offset": 256, 00:17:52.642 "data_size": 7936 00:17:52.642 } 00:17:52.642 ] 00:17:52.642 }' 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.642 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.212 [2024-09-28 16:19:07.732339] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:53.212 16:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:53.472 [2024-09-28 16:19:07.995711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:17:53.472 /dev/nbd0 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:53.472 1+0 records in 00:17:53.472 1+0 records out 00:17:53.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311925 s, 13.1 MB/s 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:53.472 16:19:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:53.472 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:54.042 7936+0 records in 00:17:54.042 7936+0 records out 00:17:54.042 32505856 bytes (33 MB, 31 MiB) copied, 0.624172 s, 52.1 MB/s 00:17:54.042 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:54.042 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.042 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:54.042 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.042 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:54.042 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.042 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:54.302 [2024-09-28 16:19:08.901470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.302 [2024-09-28 16:19:08.918404] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.302 "name": "raid_bdev1", 00:17:54.302 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:17:54.302 "strip_size_kb": 0, 00:17:54.302 "state": "online", 00:17:54.302 "raid_level": "raid1", 00:17:54.302 "superblock": true, 00:17:54.302 "num_base_bdevs": 2, 00:17:54.302 "num_base_bdevs_discovered": 1, 00:17:54.302 "num_base_bdevs_operational": 1, 00:17:54.302 "base_bdevs_list": [ 00:17:54.302 { 00:17:54.302 "name": null, 00:17:54.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.302 "is_configured": false, 00:17:54.302 "data_offset": 0, 00:17:54.302 "data_size": 7936 00:17:54.302 }, 00:17:54.302 { 00:17:54.302 "name": "BaseBdev2", 00:17:54.302 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:54.302 "is_configured": true, 00:17:54.302 "data_offset": 256, 00:17:54.302 "data_size": 7936 00:17:54.302 } 00:17:54.302 ] 00:17:54.302 }' 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.302 16:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.873 16:19:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:54.873 16:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.873 16:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.873 [2024-09-28 16:19:09.389567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.873 [2024-09-28 16:19:09.405776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:54.873 16:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.873 16:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:54.873 [2024-09-28 16:19:09.407958] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.812 16:19:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.812 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.812 "name": "raid_bdev1", 00:17:55.812 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:17:55.812 "strip_size_kb": 0, 00:17:55.812 "state": "online", 00:17:55.812 "raid_level": "raid1", 00:17:55.812 "superblock": true, 00:17:55.812 "num_base_bdevs": 2, 00:17:55.812 "num_base_bdevs_discovered": 2, 00:17:55.812 "num_base_bdevs_operational": 2, 00:17:55.812 "process": { 00:17:55.812 "type": "rebuild", 00:17:55.812 "target": "spare", 00:17:55.812 "progress": { 00:17:55.812 "blocks": 2560, 00:17:55.812 "percent": 32 00:17:55.813 } 00:17:55.813 }, 00:17:55.813 "base_bdevs_list": [ 00:17:55.813 { 00:17:55.813 "name": "spare", 00:17:55.813 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:17:55.813 "is_configured": true, 00:17:55.813 "data_offset": 256, 00:17:55.813 "data_size": 7936 00:17:55.813 }, 00:17:55.813 { 00:17:55.813 "name": "BaseBdev2", 00:17:55.813 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:55.813 "is_configured": true, 00:17:55.813 "data_offset": 256, 00:17:55.813 "data_size": 7936 00:17:55.813 } 00:17:55.813 ] 00:17:55.813 }' 00:17:55.813 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.073 16:19:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.073 [2024-09-28 16:19:10.571007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.073 [2024-09-28 16:19:10.616641] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.073 [2024-09-28 16:19:10.616706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.073 [2024-09-28 16:19:10.616721] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.073 [2024-09-28 16:19:10.616731] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.073 16:19:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.073 "name": "raid_bdev1", 00:17:56.073 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:17:56.073 "strip_size_kb": 0, 00:17:56.073 "state": "online", 00:17:56.073 "raid_level": "raid1", 00:17:56.073 "superblock": true, 00:17:56.073 "num_base_bdevs": 2, 00:17:56.073 "num_base_bdevs_discovered": 1, 00:17:56.073 "num_base_bdevs_operational": 1, 00:17:56.073 "base_bdevs_list": [ 00:17:56.073 { 00:17:56.073 "name": null, 00:17:56.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.073 "is_configured": false, 00:17:56.073 "data_offset": 0, 00:17:56.073 "data_size": 7936 00:17:56.073 }, 00:17:56.073 { 00:17:56.073 "name": "BaseBdev2", 00:17:56.073 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:56.073 "is_configured": true, 00:17:56.073 "data_offset": 256, 00:17:56.073 "data_size": 7936 00:17:56.073 } 00:17:56.073 ] 00:17:56.073 }' 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.073 16:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.644 16:19:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.644 "name": "raid_bdev1", 00:17:56.644 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:17:56.644 "strip_size_kb": 0, 00:17:56.644 "state": "online", 00:17:56.644 "raid_level": "raid1", 00:17:56.644 "superblock": true, 00:17:56.644 "num_base_bdevs": 2, 00:17:56.644 "num_base_bdevs_discovered": 1, 00:17:56.644 "num_base_bdevs_operational": 1, 00:17:56.644 "base_bdevs_list": [ 00:17:56.644 { 00:17:56.644 "name": null, 00:17:56.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.644 "is_configured": false, 00:17:56.644 "data_offset": 0, 00:17:56.644 "data_size": 7936 00:17:56.644 }, 00:17:56.644 { 00:17:56.644 "name": "BaseBdev2", 00:17:56.644 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:56.644 "is_configured": true, 00:17:56.644 "data_offset": 256, 00:17:56.644 "data_size": 7936 00:17:56.644 } 00:17:56.644 ] 00:17:56.644 }' 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.644 16:19:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.644 [2024-09-28 16:19:11.256585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.644 [2024-09-28 16:19:11.271901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.644 16:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:56.644 [2024-09-28 16:19:11.274079] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.024 "name": "raid_bdev1", 00:17:58.024 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:17:58.024 "strip_size_kb": 0, 00:17:58.024 "state": "online", 00:17:58.024 "raid_level": "raid1", 00:17:58.024 "superblock": true, 00:17:58.024 "num_base_bdevs": 2, 00:17:58.024 "num_base_bdevs_discovered": 2, 00:17:58.024 "num_base_bdevs_operational": 2, 00:17:58.024 "process": { 00:17:58.024 "type": "rebuild", 00:17:58.024 "target": "spare", 00:17:58.024 "progress": { 00:17:58.024 "blocks": 2560, 00:17:58.024 "percent": 32 00:17:58.024 } 00:17:58.024 }, 00:17:58.024 "base_bdevs_list": [ 00:17:58.024 { 00:17:58.024 "name": "spare", 00:17:58.024 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:17:58.024 "is_configured": true, 00:17:58.024 "data_offset": 256, 00:17:58.024 "data_size": 7936 00:17:58.024 }, 00:17:58.024 { 00:17:58.024 "name": "BaseBdev2", 00:17:58.024 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:58.024 "is_configured": true, 00:17:58.024 "data_offset": 256, 00:17:58.024 "data_size": 7936 00:17:58.024 } 00:17:58.024 ] 00:17:58.024 }' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:58.024 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=685 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.024 16:19:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.024 "name": "raid_bdev1", 00:17:58.024 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:17:58.024 "strip_size_kb": 0, 00:17:58.024 "state": "online", 00:17:58.024 "raid_level": "raid1", 00:17:58.024 "superblock": true, 00:17:58.024 "num_base_bdevs": 2, 00:17:58.024 "num_base_bdevs_discovered": 2, 00:17:58.024 "num_base_bdevs_operational": 2, 00:17:58.024 "process": { 00:17:58.024 "type": "rebuild", 00:17:58.024 "target": "spare", 00:17:58.024 "progress": { 00:17:58.024 "blocks": 2816, 00:17:58.024 "percent": 35 00:17:58.024 } 00:17:58.024 }, 00:17:58.024 "base_bdevs_list": [ 00:17:58.024 { 00:17:58.024 "name": "spare", 00:17:58.024 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:17:58.024 "is_configured": true, 00:17:58.024 "data_offset": 256, 00:17:58.024 "data_size": 7936 00:17:58.024 }, 00:17:58.024 { 00:17:58.024 "name": "BaseBdev2", 00:17:58.024 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:58.024 "is_configured": true, 00:17:58.024 "data_offset": 256, 00:17:58.024 "data_size": 7936 00:17:58.024 } 00:17:58.024 ] 00:17:58.024 }' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.024 16:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.965 "name": "raid_bdev1", 00:17:58.965 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:17:58.965 "strip_size_kb": 0, 00:17:58.965 "state": "online", 00:17:58.965 "raid_level": "raid1", 00:17:58.965 "superblock": true, 00:17:58.965 "num_base_bdevs": 2, 00:17:58.965 "num_base_bdevs_discovered": 2, 00:17:58.965 "num_base_bdevs_operational": 2, 00:17:58.965 "process": { 00:17:58.965 "type": "rebuild", 00:17:58.965 "target": "spare", 00:17:58.965 "progress": { 00:17:58.965 "blocks": 5888, 00:17:58.965 "percent": 74 00:17:58.965 } 00:17:58.965 }, 00:17:58.965 "base_bdevs_list": [ 00:17:58.965 { 00:17:58.965 "name": "spare", 00:17:58.965 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:17:58.965 "is_configured": true, 00:17:58.965 "data_offset": 256, 00:17:58.965 "data_size": 7936 00:17:58.965 
}, 00:17:58.965 { 00:17:58.965 "name": "BaseBdev2", 00:17:58.965 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:17:58.965 "is_configured": true, 00:17:58.965 "data_offset": 256, 00:17:58.965 "data_size": 7936 00:17:58.965 } 00:17:58.965 ] 00:17:58.965 }' 00:17:58.965 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.225 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.225 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.225 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.225 16:19:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.796 [2024-09-28 16:19:14.395808] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:59.796 [2024-09-28 16:19:14.395891] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:59.796 [2024-09-28 16:19:14.395995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.367 "name": "raid_bdev1", 00:18:00.367 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:00.367 "strip_size_kb": 0, 00:18:00.367 "state": "online", 00:18:00.367 "raid_level": "raid1", 00:18:00.367 "superblock": true, 00:18:00.367 "num_base_bdevs": 2, 00:18:00.367 "num_base_bdevs_discovered": 2, 00:18:00.367 "num_base_bdevs_operational": 2, 00:18:00.367 "base_bdevs_list": [ 00:18:00.367 { 00:18:00.367 "name": "spare", 00:18:00.367 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:18:00.367 "is_configured": true, 00:18:00.367 "data_offset": 256, 00:18:00.367 "data_size": 7936 00:18:00.367 }, 00:18:00.367 { 00:18:00.367 "name": "BaseBdev2", 00:18:00.367 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:00.367 "is_configured": true, 00:18:00.367 "data_offset": 256, 00:18:00.367 "data_size": 7936 00:18:00.367 } 00:18:00.367 ] 00:18:00.367 }' 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.367 "name": "raid_bdev1", 00:18:00.367 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:00.367 "strip_size_kb": 0, 00:18:00.367 "state": "online", 00:18:00.367 "raid_level": "raid1", 00:18:00.367 "superblock": true, 00:18:00.367 "num_base_bdevs": 2, 00:18:00.367 "num_base_bdevs_discovered": 2, 00:18:00.367 "num_base_bdevs_operational": 2, 00:18:00.367 "base_bdevs_list": [ 00:18:00.367 { 00:18:00.367 "name": "spare", 00:18:00.367 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:18:00.367 "is_configured": true, 00:18:00.367 "data_offset": 256, 00:18:00.367 "data_size": 7936 00:18:00.367 }, 00:18:00.367 { 00:18:00.367 "name": "BaseBdev2", 00:18:00.367 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:00.367 "is_configured": true, 
00:18:00.367 "data_offset": 256, 00:18:00.367 "data_size": 7936 00:18:00.367 } 00:18:00.367 ] 00:18:00.367 }' 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.367 16:19:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.367 16:19:15 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.367 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.628 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.628 "name": "raid_bdev1", 00:18:00.628 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:00.628 "strip_size_kb": 0, 00:18:00.628 "state": "online", 00:18:00.628 "raid_level": "raid1", 00:18:00.628 "superblock": true, 00:18:00.628 "num_base_bdevs": 2, 00:18:00.628 "num_base_bdevs_discovered": 2, 00:18:00.628 "num_base_bdevs_operational": 2, 00:18:00.628 "base_bdevs_list": [ 00:18:00.628 { 00:18:00.628 "name": "spare", 00:18:00.628 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:18:00.628 "is_configured": true, 00:18:00.628 "data_offset": 256, 00:18:00.628 "data_size": 7936 00:18:00.628 }, 00:18:00.628 { 00:18:00.628 "name": "BaseBdev2", 00:18:00.628 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:00.628 "is_configured": true, 00:18:00.628 "data_offset": 256, 00:18:00.628 "data_size": 7936 00:18:00.628 } 00:18:00.628 ] 00:18:00.628 }' 00:18:00.628 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.628 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.888 [2024-09-28 16:19:15.402081] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.888 [2024-09-28 16:19:15.402161] bdev_raid.c:1895:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:00.888 [2024-09-28 16:19:15.402281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.888 [2024-09-28 16:19:15.402401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.888 [2024-09-28 16:19:15.402451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:00.888 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.889 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:00.889 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.889 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:00.889 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:01.149 /dev/nbd0 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.149 1+0 records in 00:18:01.149 1+0 records out 00:18:01.149 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023609 s, 17.3 MB/s 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.149 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:01.407 /dev/nbd1 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:01.407 16:19:15 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.407 1+0 records in 00:18:01.407 1+0 records out 00:18:01.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355489 s, 11.5 MB/s 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.407 16:19:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.667 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:01.927 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.928 16:19:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.928 [2024-09-28 16:19:16.561919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.928 [2024-09-28 16:19:16.561981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.928 [2024-09-28 16:19:16.562002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:01.928 [2024-09-28 16:19:16.562011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.928 [2024-09-28 16:19:16.564065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.928 [2024-09-28 16:19:16.564109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.928 [2024-09-28 16:19:16.564191] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:18:01.928 [2024-09-28 16:19:16.564253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.928 [2024-09-28 16:19:16.564408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.928 spare 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.928 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.188 [2024-09-28 16:19:16.664318] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:02.188 [2024-09-28 16:19:16.664347] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:02.188 [2024-09-28 16:19:16.664589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:02.188 [2024-09-28 16:19:16.664741] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:02.188 [2024-09-28 16:19:16.664751] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:02.188 [2024-09-28 16:19:16.664899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.188 
16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.188 "name": "raid_bdev1", 00:18:02.188 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:02.188 "strip_size_kb": 0, 00:18:02.188 "state": "online", 00:18:02.188 "raid_level": "raid1", 00:18:02.188 "superblock": true, 00:18:02.188 "num_base_bdevs": 2, 00:18:02.188 "num_base_bdevs_discovered": 2, 00:18:02.188 "num_base_bdevs_operational": 2, 00:18:02.188 "base_bdevs_list": [ 00:18:02.188 { 00:18:02.188 "name": "spare", 00:18:02.188 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:18:02.188 "is_configured": true, 00:18:02.188 "data_offset": 256, 00:18:02.188 
"data_size": 7936 00:18:02.188 }, 00:18:02.188 { 00:18:02.188 "name": "BaseBdev2", 00:18:02.188 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:02.188 "is_configured": true, 00:18:02.188 "data_offset": 256, 00:18:02.188 "data_size": 7936 00:18:02.188 } 00:18:02.188 ] 00:18:02.188 }' 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.188 16:19:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.448 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.709 "name": "raid_bdev1", 00:18:02.709 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:02.709 "strip_size_kb": 0, 00:18:02.709 "state": "online", 00:18:02.709 "raid_level": "raid1", 00:18:02.709 "superblock": true, 00:18:02.709 "num_base_bdevs": 2, 
00:18:02.709 "num_base_bdevs_discovered": 2, 00:18:02.709 "num_base_bdevs_operational": 2, 00:18:02.709 "base_bdevs_list": [ 00:18:02.709 { 00:18:02.709 "name": "spare", 00:18:02.709 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:18:02.709 "is_configured": true, 00:18:02.709 "data_offset": 256, 00:18:02.709 "data_size": 7936 00:18:02.709 }, 00:18:02.709 { 00:18:02.709 "name": "BaseBdev2", 00:18:02.709 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:02.709 "is_configured": true, 00:18:02.709 "data_offset": 256, 00:18:02.709 "data_size": 7936 00:18:02.709 } 00:18:02.709 ] 00:18:02.709 }' 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.709 16:19:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.709 [2024-09-28 16:19:17.292734] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.709 
16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.709 "name": "raid_bdev1", 00:18:02.709 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:02.709 "strip_size_kb": 0, 00:18:02.709 "state": "online", 00:18:02.709 "raid_level": "raid1", 00:18:02.709 "superblock": true, 00:18:02.709 "num_base_bdevs": 2, 00:18:02.709 "num_base_bdevs_discovered": 1, 00:18:02.709 "num_base_bdevs_operational": 1, 00:18:02.709 "base_bdevs_list": [ 00:18:02.709 { 00:18:02.709 "name": null, 00:18:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.709 "is_configured": false, 00:18:02.709 "data_offset": 0, 00:18:02.709 "data_size": 7936 00:18:02.709 }, 00:18:02.709 { 00:18:02.709 "name": "BaseBdev2", 00:18:02.709 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:02.709 "is_configured": true, 00:18:02.709 "data_offset": 256, 00:18:02.709 "data_size": 7936 00:18:02.709 } 00:18:02.709 ] 00:18:02.709 }' 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.709 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.281 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.281 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.281 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.281 [2024-09-28 16:19:17.696098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.281 [2024-09-28 16:19:17.696343] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.281 [2024-09-28 16:19:17.696409] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:03.281 [2024-09-28 16:19:17.696485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.281 [2024-09-28 16:19:17.710882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:03.281 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.281 16:19:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:03.281 [2024-09-28 16:19:17.712679] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.220 "name": "raid_bdev1", 00:18:04.220 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:04.220 "strip_size_kb": 0, 00:18:04.220 "state": "online", 
00:18:04.220 "raid_level": "raid1", 00:18:04.220 "superblock": true, 00:18:04.220 "num_base_bdevs": 2, 00:18:04.220 "num_base_bdevs_discovered": 2, 00:18:04.220 "num_base_bdevs_operational": 2, 00:18:04.220 "process": { 00:18:04.220 "type": "rebuild", 00:18:04.220 "target": "spare", 00:18:04.220 "progress": { 00:18:04.220 "blocks": 2560, 00:18:04.220 "percent": 32 00:18:04.220 } 00:18:04.220 }, 00:18:04.220 "base_bdevs_list": [ 00:18:04.220 { 00:18:04.220 "name": "spare", 00:18:04.220 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:18:04.220 "is_configured": true, 00:18:04.220 "data_offset": 256, 00:18:04.220 "data_size": 7936 00:18:04.220 }, 00:18:04.220 { 00:18:04.220 "name": "BaseBdev2", 00:18:04.220 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:04.220 "is_configured": true, 00:18:04.220 "data_offset": 256, 00:18:04.220 "data_size": 7936 00:18:04.220 } 00:18:04.220 ] 00:18:04.220 }' 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.220 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.221 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.221 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.221 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.221 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.221 [2024-09-28 16:19:18.864102] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.480 [2024-09-28 16:19:18.918324] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.480 [2024-09-28 
16:19:18.918430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.480 [2024-09-28 16:19:18.918462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.480 [2024-09-28 16:19:18.918484] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.480 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.481 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.481 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.481 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.481 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.481 "name": "raid_bdev1", 00:18:04.481 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:04.481 "strip_size_kb": 0, 00:18:04.481 "state": "online", 00:18:04.481 "raid_level": "raid1", 00:18:04.481 "superblock": true, 00:18:04.481 "num_base_bdevs": 2, 00:18:04.481 "num_base_bdevs_discovered": 1, 00:18:04.481 "num_base_bdevs_operational": 1, 00:18:04.481 "base_bdevs_list": [ 00:18:04.481 { 00:18:04.481 "name": null, 00:18:04.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.481 "is_configured": false, 00:18:04.481 "data_offset": 0, 00:18:04.481 "data_size": 7936 00:18:04.481 }, 00:18:04.481 { 00:18:04.481 "name": "BaseBdev2", 00:18:04.481 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:04.481 "is_configured": true, 00:18:04.481 "data_offset": 256, 00:18:04.481 "data_size": 7936 00:18:04.481 } 00:18:04.481 ] 00:18:04.481 }' 00:18:04.481 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.481 16:19:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.740 16:19:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:04.741 16:19:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.741 16:19:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.741 [2024-09-28 16:19:19.384219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.741 [2024-09-28 16:19:19.384288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.741 [2024-09-28 16:19:19.384309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:18:04.741 [2024-09-28 16:19:19.384320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.741 [2024-09-28 16:19:19.384783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.741 [2024-09-28 16:19:19.384810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.741 [2024-09-28 16:19:19.384887] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:04.741 [2024-09-28 16:19:19.384900] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.741 [2024-09-28 16:19:19.384910] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:04.741 [2024-09-28 16:19:19.384930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.741 [2024-09-28 16:19:19.399515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:04.741 spare 00:18:04.741 16:19:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.741 16:19:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:04.741 [2024-09-28 16:19:19.401216] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.124 "name": "raid_bdev1", 00:18:06.124 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:06.124 "strip_size_kb": 0, 00:18:06.124 "state": "online", 00:18:06.124 "raid_level": "raid1", 00:18:06.124 "superblock": true, 00:18:06.124 "num_base_bdevs": 2, 00:18:06.124 "num_base_bdevs_discovered": 2, 00:18:06.124 "num_base_bdevs_operational": 2, 00:18:06.124 "process": { 00:18:06.124 "type": "rebuild", 00:18:06.124 "target": "spare", 00:18:06.124 "progress": { 00:18:06.124 "blocks": 2560, 00:18:06.124 "percent": 32 00:18:06.124 } 00:18:06.124 }, 00:18:06.124 "base_bdevs_list": [ 00:18:06.124 { 00:18:06.124 "name": "spare", 00:18:06.124 "uuid": "2c5f63e3-37e7-5376-9463-d12adc5c7ede", 00:18:06.124 "is_configured": true, 00:18:06.124 "data_offset": 256, 00:18:06.124 "data_size": 7936 00:18:06.124 }, 00:18:06.124 { 00:18:06.124 "name": "BaseBdev2", 00:18:06.124 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:06.124 "is_configured": true, 00:18:06.124 "data_offset": 256, 00:18:06.124 "data_size": 7936 00:18:06.124 } 00:18:06.124 ] 00:18:06.124 }' 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.124 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 [2024-09-28 16:19:20.569664] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.125 [2024-09-28 16:19:20.606036] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.125 [2024-09-28 16:19:20.606135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.125 [2024-09-28 16:19:20.606169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.125 [2024-09-28 16:19:20.606189] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.125 "name": "raid_bdev1", 00:18:06.125 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:06.125 "strip_size_kb": 0, 00:18:06.125 "state": "online", 00:18:06.125 "raid_level": "raid1", 00:18:06.125 "superblock": true, 00:18:06.125 "num_base_bdevs": 2, 00:18:06.125 "num_base_bdevs_discovered": 1, 00:18:06.125 "num_base_bdevs_operational": 1, 00:18:06.125 "base_bdevs_list": [ 00:18:06.125 { 00:18:06.125 "name": null, 00:18:06.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.125 "is_configured": false, 00:18:06.125 "data_offset": 0, 00:18:06.125 "data_size": 7936 00:18:06.125 }, 00:18:06.125 { 00:18:06.125 "name": "BaseBdev2", 00:18:06.125 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:06.125 "is_configured": true, 00:18:06.125 "data_offset": 256, 00:18:06.125 "data_size": 7936 00:18:06.125 } 00:18:06.125 ] 00:18:06.125 }' 
00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.125 16:19:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.385 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.645 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.645 "name": "raid_bdev1", 00:18:06.645 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:06.645 "strip_size_kb": 0, 00:18:06.645 "state": "online", 00:18:06.645 "raid_level": "raid1", 00:18:06.645 "superblock": true, 00:18:06.645 "num_base_bdevs": 2, 00:18:06.645 "num_base_bdevs_discovered": 1, 00:18:06.645 "num_base_bdevs_operational": 1, 00:18:06.645 "base_bdevs_list": [ 00:18:06.645 { 00:18:06.645 "name": null, 00:18:06.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.645 "is_configured": false, 00:18:06.645 "data_offset": 0, 
00:18:06.645 "data_size": 7936 00:18:06.645 }, 00:18:06.645 { 00:18:06.645 "name": "BaseBdev2", 00:18:06.645 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:06.645 "is_configured": true, 00:18:06.645 "data_offset": 256, 00:18:06.645 "data_size": 7936 00:18:06.645 } 00:18:06.645 ] 00:18:06.645 }' 00:18:06.645 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.645 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.645 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.645 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.645 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:06.645 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.646 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.646 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.646 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.646 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.646 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.646 [2024-09-28 16:19:21.186765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.646 [2024-09-28 16:19:21.186817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.646 [2024-09-28 16:19:21.186836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:06.646 [2024-09-28 16:19:21.186845] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.646 [2024-09-28 16:19:21.187307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.646 [2024-09-28 16:19:21.187333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.646 [2024-09-28 16:19:21.187422] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:06.646 [2024-09-28 16:19:21.187435] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:06.646 [2024-09-28 16:19:21.187448] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:06.646 [2024-09-28 16:19:21.187457] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:06.646 BaseBdev1 00:18:06.646 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.646 16:19:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.585 "name": "raid_bdev1", 00:18:07.585 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:07.585 "strip_size_kb": 0, 00:18:07.585 "state": "online", 00:18:07.585 "raid_level": "raid1", 00:18:07.585 "superblock": true, 00:18:07.585 "num_base_bdevs": 2, 00:18:07.585 "num_base_bdevs_discovered": 1, 00:18:07.585 "num_base_bdevs_operational": 1, 00:18:07.585 "base_bdevs_list": [ 00:18:07.585 { 00:18:07.585 "name": null, 00:18:07.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.585 "is_configured": false, 00:18:07.585 "data_offset": 0, 00:18:07.585 "data_size": 7936 00:18:07.585 }, 00:18:07.585 { 00:18:07.585 "name": "BaseBdev2", 00:18:07.585 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:07.585 "is_configured": true, 00:18:07.585 "data_offset": 256, 00:18:07.585 "data_size": 7936 00:18:07.585 } 00:18:07.585 ] 00:18:07.585 }' 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.585 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.156 "name": "raid_bdev1", 00:18:08.156 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:08.156 "strip_size_kb": 0, 00:18:08.156 "state": "online", 00:18:08.156 "raid_level": "raid1", 00:18:08.156 "superblock": true, 00:18:08.156 "num_base_bdevs": 2, 00:18:08.156 "num_base_bdevs_discovered": 1, 00:18:08.156 "num_base_bdevs_operational": 1, 00:18:08.156 "base_bdevs_list": [ 00:18:08.156 { 00:18:08.156 "name": null, 00:18:08.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.156 "is_configured": false, 00:18:08.156 "data_offset": 0, 00:18:08.156 "data_size": 7936 00:18:08.156 }, 00:18:08.156 { 00:18:08.156 "name": "BaseBdev2", 00:18:08.156 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:08.156 "is_configured": true, 
00:18:08.156 "data_offset": 256, 00:18:08.156 "data_size": 7936 00:18:08.156 } 00:18:08.156 ] 00:18:08.156 }' 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 [2024-09-28 16:19:22.748136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.156 [2024-09-28 16:19:22.748282] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.156 [2024-09-28 16:19:22.748298] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.156 request: 00:18:08.156 { 00:18:08.156 "base_bdev": "BaseBdev1", 00:18:08.156 "raid_bdev": "raid_bdev1", 00:18:08.156 "method": "bdev_raid_add_base_bdev", 00:18:08.156 "req_id": 1 00:18:08.156 } 00:18:08.156 Got JSON-RPC error response 00:18:08.156 response: 00:18:08.156 { 00:18:08.156 "code": -22, 00:18:08.156 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:08.156 } 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:08.156 16:19:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:09.095 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.096 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.356 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.356 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.356 "name": "raid_bdev1", 00:18:09.356 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:09.356 "strip_size_kb": 0, 00:18:09.356 "state": "online", 00:18:09.356 "raid_level": "raid1", 00:18:09.356 "superblock": true, 00:18:09.356 "num_base_bdevs": 2, 00:18:09.356 "num_base_bdevs_discovered": 1, 00:18:09.356 "num_base_bdevs_operational": 1, 00:18:09.356 "base_bdevs_list": [ 00:18:09.356 { 00:18:09.356 "name": null, 00:18:09.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.356 "is_configured": false, 00:18:09.356 "data_offset": 0, 00:18:09.356 "data_size": 7936 00:18:09.356 }, 00:18:09.356 { 00:18:09.356 "name": "BaseBdev2", 00:18:09.356 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:09.356 "is_configured": true, 00:18:09.356 "data_offset": 256, 00:18:09.356 "data_size": 7936 00:18:09.356 } 00:18:09.356 ] 00:18:09.356 }' 
00:18:09.356 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.356 16:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.616 "name": "raid_bdev1", 00:18:09.616 "uuid": "626c687b-f6c8-4cb5-9db5-c96496ce9076", 00:18:09.616 "strip_size_kb": 0, 00:18:09.616 "state": "online", 00:18:09.616 "raid_level": "raid1", 00:18:09.616 "superblock": true, 00:18:09.616 "num_base_bdevs": 2, 00:18:09.616 "num_base_bdevs_discovered": 1, 00:18:09.616 "num_base_bdevs_operational": 1, 00:18:09.616 "base_bdevs_list": [ 00:18:09.616 { 00:18:09.616 "name": null, 00:18:09.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.616 "is_configured": false, 00:18:09.616 "data_offset": 0, 
00:18:09.616 "data_size": 7936 00:18:09.616 }, 00:18:09.616 { 00:18:09.616 "name": "BaseBdev2", 00:18:09.616 "uuid": "a8f7b42d-2a3f-5b59-98ed-100450813e51", 00:18:09.616 "is_configured": true, 00:18:09.616 "data_offset": 256, 00:18:09.616 "data_size": 7936 00:18:09.616 } 00:18:09.616 ] 00:18:09.616 }' 00:18:09.616 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86518 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86518 ']' 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86518 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86518 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.876 killing process with pid 86518 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86518' 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86518 00:18:09.876 Received shutdown signal, test time was about 
60.000000 seconds 00:18:09.876 00:18:09.876 Latency(us) 00:18:09.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.876 =================================================================================================================== 00:18:09.876 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.876 [2024-09-28 16:19:24.383358] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.876 [2024-09-28 16:19:24.383481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.876 [2024-09-28 16:19:24.383524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.876 [2024-09-28 16:19:24.383536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:09.876 16:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86518 00:18:10.135 [2024-09-28 16:19:24.662537] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.513 16:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:11.513 00:18:11.513 real 0m19.696s 00:18:11.513 user 0m25.425s 00:18:11.513 sys 0m2.853s 00:18:11.513 16:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.513 16:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.513 ************************************ 00:18:11.513 END TEST raid_rebuild_test_sb_4k 00:18:11.513 ************************************ 00:18:11.513 16:19:25 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:11.513 16:19:25 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:11.513 16:19:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:11.513 16:19:25 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.513 16:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.513 ************************************ 00:18:11.513 START TEST raid_state_function_test_sb_md_separate 00:18:11.513 ************************************ 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:11.513 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87206 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:11.514 Process raid pid: 87206 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87206' 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87206 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87206 ']' 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.514 16:19:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.514 [2024-09-28 16:19:26.013329] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:11.514 [2024-09-28 16:19:26.013442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.514 [2024-09-28 16:19:26.179902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.774 [2024-09-28 16:19:26.376319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.077 [2024-09-28 16:19:26.576776] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.077 [2024-09-28 16:19:26.576815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.371 [2024-09-28 16:19:26.834113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.371 [2024-09-28 16:19:26.834175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.371 [2024-09-28 16:19:26.834184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.371 [2024-09-28 16:19:26.834193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.371 16:19:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.371 "name": "Existed_Raid", 00:18:12.371 "uuid": "4d6ca69b-1c3c-4139-87ac-faad90c6eb6b", 00:18:12.371 "strip_size_kb": 0, 00:18:12.371 "state": "configuring", 00:18:12.371 "raid_level": "raid1", 00:18:12.371 "superblock": true, 00:18:12.371 "num_base_bdevs": 2, 00:18:12.371 "num_base_bdevs_discovered": 0, 00:18:12.371 "num_base_bdevs_operational": 2, 00:18:12.371 "base_bdevs_list": [ 00:18:12.371 { 00:18:12.371 "name": "BaseBdev1", 00:18:12.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.371 "is_configured": false, 00:18:12.371 "data_offset": 0, 00:18:12.371 "data_size": 0 00:18:12.371 }, 00:18:12.371 { 00:18:12.371 "name": "BaseBdev2", 00:18:12.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.371 "is_configured": false, 00:18:12.371 "data_offset": 0, 00:18:12.371 "data_size": 0 00:18:12.371 } 00:18:12.371 ] 00:18:12.371 }' 00:18:12.371 16:19:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.371 16:19:26 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.637 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:12.637 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.637 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.637 [2024-09-28 16:19:27.321198] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.637 [2024-09-28 16:19:27.321250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.898 [2024-09-28 16:19:27.333198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.898 [2024-09-28 16:19:27.333246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.898 [2024-09-28 16:19:27.333254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.898 [2024-09-28 16:19:27.333265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.898 16:19:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.898 [2024-09-28 16:19:27.413334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.898 BaseBdev1 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.898 [ 00:18:12.898 { 00:18:12.898 "name": "BaseBdev1", 00:18:12.898 "aliases": [ 00:18:12.898 "5c496bf2-23c5-49d5-aacd-79fa910d577a" 00:18:12.898 ], 00:18:12.898 "product_name": "Malloc disk", 00:18:12.898 "block_size": 4096, 00:18:12.898 "num_blocks": 8192, 00:18:12.898 "uuid": "5c496bf2-23c5-49d5-aacd-79fa910d577a", 00:18:12.898 "md_size": 32, 00:18:12.898 "md_interleave": false, 00:18:12.898 "dif_type": 0, 00:18:12.898 "assigned_rate_limits": { 00:18:12.898 "rw_ios_per_sec": 0, 00:18:12.898 "rw_mbytes_per_sec": 0, 00:18:12.898 "r_mbytes_per_sec": 0, 00:18:12.898 "w_mbytes_per_sec": 0 00:18:12.898 }, 00:18:12.898 "claimed": true, 00:18:12.898 "claim_type": "exclusive_write", 00:18:12.898 "zoned": false, 00:18:12.898 "supported_io_types": { 00:18:12.898 "read": true, 00:18:12.898 "write": true, 00:18:12.898 "unmap": true, 00:18:12.898 "flush": true, 00:18:12.898 "reset": true, 00:18:12.898 "nvme_admin": false, 00:18:12.898 "nvme_io": false, 00:18:12.898 "nvme_io_md": false, 00:18:12.898 "write_zeroes": true, 00:18:12.898 "zcopy": true, 00:18:12.898 "get_zone_info": false, 00:18:12.898 "zone_management": false, 00:18:12.898 "zone_append": false, 00:18:12.898 "compare": false, 00:18:12.898 "compare_and_write": false, 00:18:12.898 "abort": true, 00:18:12.898 "seek_hole": false, 00:18:12.898 "seek_data": false, 00:18:12.898 "copy": true, 00:18:12.898 "nvme_iov_md": false 00:18:12.898 }, 00:18:12.898 "memory_domains": [ 00:18:12.898 { 00:18:12.898 "dma_device_id": "system", 00:18:12.898 "dma_device_type": 1 00:18:12.898 }, 00:18:12.898 { 00:18:12.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.898 "dma_device_type": 2 00:18:12.898 } 
00:18:12.898 ], 00:18:12.898 "driver_specific": {} 00:18:12.898 } 00:18:12.898 ] 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.898 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.899 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.899 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.899 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.899 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.899 "name": "Existed_Raid", 00:18:12.899 "uuid": "f48cc394-6aac-4d8a-81c6-aa16f1ee8979", 00:18:12.899 "strip_size_kb": 0, 00:18:12.899 "state": "configuring", 00:18:12.899 "raid_level": "raid1", 00:18:12.899 "superblock": true, 00:18:12.899 "num_base_bdevs": 2, 00:18:12.899 "num_base_bdevs_discovered": 1, 00:18:12.899 "num_base_bdevs_operational": 2, 00:18:12.899 "base_bdevs_list": [ 00:18:12.899 { 00:18:12.899 "name": "BaseBdev1", 00:18:12.899 "uuid": "5c496bf2-23c5-49d5-aacd-79fa910d577a", 00:18:12.899 "is_configured": true, 00:18:12.899 "data_offset": 256, 00:18:12.899 "data_size": 7936 00:18:12.899 }, 00:18:12.899 { 00:18:12.899 "name": "BaseBdev2", 00:18:12.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.899 "is_configured": false, 00:18:12.899 "data_offset": 0, 00:18:12.899 "data_size": 0 00:18:12.899 } 00:18:12.899 ] 00:18:12.899 }' 00:18:12.899 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.899 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.468 [2024-09-28 16:19:27.872589] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:18:13.468 [2024-09-28 16:19:27.872631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.468 [2024-09-28 16:19:27.884626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.468 [2024-09-28 16:19:27.886363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.468 [2024-09-28 16:19:27.886403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.468 "name": "Existed_Raid", 00:18:13.468 "uuid": "9380e6de-aaad-4e9a-8537-50790fad131e", 00:18:13.468 "strip_size_kb": 0, 00:18:13.468 "state": "configuring", 00:18:13.468 "raid_level": "raid1", 00:18:13.468 "superblock": true, 00:18:13.468 "num_base_bdevs": 2, 00:18:13.468 "num_base_bdevs_discovered": 1, 00:18:13.468 "num_base_bdevs_operational": 2, 00:18:13.468 "base_bdevs_list": [ 00:18:13.468 { 00:18:13.468 "name": 
"BaseBdev1", 00:18:13.468 "uuid": "5c496bf2-23c5-49d5-aacd-79fa910d577a", 00:18:13.468 "is_configured": true, 00:18:13.468 "data_offset": 256, 00:18:13.468 "data_size": 7936 00:18:13.468 }, 00:18:13.468 { 00:18:13.468 "name": "BaseBdev2", 00:18:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.468 "is_configured": false, 00:18:13.468 "data_offset": 0, 00:18:13.468 "data_size": 0 00:18:13.468 } 00:18:13.468 ] 00:18:13.468 }' 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.468 16:19:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.728 [2024-09-28 16:19:28.333948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.728 [2024-09-28 16:19:28.334164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:13.728 [2024-09-28 16:19:28.334177] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.728 [2024-09-28 16:19:28.334275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:13.728 [2024-09-28 16:19:28.334404] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:13.728 [2024-09-28 16:19:28.334425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:13.728 [2024-09-28 16:19:28.334526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.728 BaseBdev2 
00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.728 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.729 [ 00:18:13.729 { 00:18:13.729 "name": "BaseBdev2", 00:18:13.729 "aliases": [ 00:18:13.729 "cb31e91a-fa42-4f1b-83d6-b7bf70699e8e" 00:18:13.729 ], 00:18:13.729 "product_name": "Malloc disk", 00:18:13.729 
"block_size": 4096, 00:18:13.729 "num_blocks": 8192, 00:18:13.729 "uuid": "cb31e91a-fa42-4f1b-83d6-b7bf70699e8e", 00:18:13.729 "md_size": 32, 00:18:13.729 "md_interleave": false, 00:18:13.729 "dif_type": 0, 00:18:13.729 "assigned_rate_limits": { 00:18:13.729 "rw_ios_per_sec": 0, 00:18:13.729 "rw_mbytes_per_sec": 0, 00:18:13.729 "r_mbytes_per_sec": 0, 00:18:13.729 "w_mbytes_per_sec": 0 00:18:13.729 }, 00:18:13.729 "claimed": true, 00:18:13.729 "claim_type": "exclusive_write", 00:18:13.729 "zoned": false, 00:18:13.729 "supported_io_types": { 00:18:13.729 "read": true, 00:18:13.729 "write": true, 00:18:13.729 "unmap": true, 00:18:13.729 "flush": true, 00:18:13.729 "reset": true, 00:18:13.729 "nvme_admin": false, 00:18:13.729 "nvme_io": false, 00:18:13.729 "nvme_io_md": false, 00:18:13.729 "write_zeroes": true, 00:18:13.729 "zcopy": true, 00:18:13.729 "get_zone_info": false, 00:18:13.729 "zone_management": false, 00:18:13.729 "zone_append": false, 00:18:13.729 "compare": false, 00:18:13.729 "compare_and_write": false, 00:18:13.729 "abort": true, 00:18:13.729 "seek_hole": false, 00:18:13.729 "seek_data": false, 00:18:13.729 "copy": true, 00:18:13.729 "nvme_iov_md": false 00:18:13.729 }, 00:18:13.729 "memory_domains": [ 00:18:13.729 { 00:18:13.729 "dma_device_id": "system", 00:18:13.729 "dma_device_type": 1 00:18:13.729 }, 00:18:13.729 { 00:18:13.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.729 "dma_device_type": 2 00:18:13.729 } 00:18:13.729 ], 00:18:13.729 "driver_specific": {} 00:18:13.729 } 00:18:13.729 ] 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i 
< num_base_bdevs )) 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.729 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.988 16:19:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.988 "name": "Existed_Raid", 00:18:13.988 "uuid": "9380e6de-aaad-4e9a-8537-50790fad131e", 00:18:13.988 "strip_size_kb": 0, 00:18:13.988 "state": "online", 00:18:13.988 "raid_level": "raid1", 00:18:13.988 "superblock": true, 00:18:13.988 "num_base_bdevs": 2, 00:18:13.988 "num_base_bdevs_discovered": 2, 00:18:13.988 "num_base_bdevs_operational": 2, 00:18:13.988 "base_bdevs_list": [ 00:18:13.988 { 00:18:13.988 "name": "BaseBdev1", 00:18:13.988 "uuid": "5c496bf2-23c5-49d5-aacd-79fa910d577a", 00:18:13.988 "is_configured": true, 00:18:13.988 "data_offset": 256, 00:18:13.988 "data_size": 7936 00:18:13.988 }, 00:18:13.988 { 00:18:13.988 "name": "BaseBdev2", 00:18:13.988 "uuid": "cb31e91a-fa42-4f1b-83d6-b7bf70699e8e", 00:18:13.988 "is_configured": true, 00:18:13.988 "data_offset": 256, 00:18:13.988 "data_size": 7936 00:18:13.988 } 00:18:13.988 ] 00:18:13.988 }' 00:18:13.988 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.988 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.248 
16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.248 [2024-09-28 16:19:28.793487] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.248 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.248 "name": "Existed_Raid", 00:18:14.248 "aliases": [ 00:18:14.248 "9380e6de-aaad-4e9a-8537-50790fad131e" 00:18:14.248 ], 00:18:14.248 "product_name": "Raid Volume", 00:18:14.248 "block_size": 4096, 00:18:14.248 "num_blocks": 7936, 00:18:14.248 "uuid": "9380e6de-aaad-4e9a-8537-50790fad131e", 00:18:14.248 "md_size": 32, 00:18:14.248 "md_interleave": false, 00:18:14.248 "dif_type": 0, 00:18:14.248 "assigned_rate_limits": { 00:18:14.248 "rw_ios_per_sec": 0, 00:18:14.248 "rw_mbytes_per_sec": 0, 00:18:14.248 "r_mbytes_per_sec": 0, 00:18:14.248 "w_mbytes_per_sec": 0 00:18:14.248 }, 00:18:14.248 "claimed": false, 00:18:14.248 "zoned": false, 00:18:14.248 "supported_io_types": { 00:18:14.248 "read": true, 00:18:14.248 "write": true, 00:18:14.248 "unmap": false, 00:18:14.248 "flush": false, 00:18:14.248 "reset": true, 00:18:14.248 "nvme_admin": false, 00:18:14.248 "nvme_io": false, 00:18:14.248 "nvme_io_md": false, 00:18:14.248 "write_zeroes": true, 00:18:14.248 "zcopy": false, 00:18:14.248 "get_zone_info": false, 00:18:14.248 "zone_management": false, 00:18:14.248 "zone_append": false, 00:18:14.248 "compare": false, 00:18:14.248 
"compare_and_write": false, 00:18:14.249 "abort": false, 00:18:14.249 "seek_hole": false, 00:18:14.249 "seek_data": false, 00:18:14.249 "copy": false, 00:18:14.249 "nvme_iov_md": false 00:18:14.249 }, 00:18:14.249 "memory_domains": [ 00:18:14.249 { 00:18:14.249 "dma_device_id": "system", 00:18:14.249 "dma_device_type": 1 00:18:14.249 }, 00:18:14.249 { 00:18:14.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.249 "dma_device_type": 2 00:18:14.249 }, 00:18:14.249 { 00:18:14.249 "dma_device_id": "system", 00:18:14.249 "dma_device_type": 1 00:18:14.249 }, 00:18:14.249 { 00:18:14.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.249 "dma_device_type": 2 00:18:14.249 } 00:18:14.249 ], 00:18:14.249 "driver_specific": { 00:18:14.249 "raid": { 00:18:14.249 "uuid": "9380e6de-aaad-4e9a-8537-50790fad131e", 00:18:14.249 "strip_size_kb": 0, 00:18:14.249 "state": "online", 00:18:14.249 "raid_level": "raid1", 00:18:14.249 "superblock": true, 00:18:14.249 "num_base_bdevs": 2, 00:18:14.249 "num_base_bdevs_discovered": 2, 00:18:14.249 "num_base_bdevs_operational": 2, 00:18:14.249 "base_bdevs_list": [ 00:18:14.249 { 00:18:14.249 "name": "BaseBdev1", 00:18:14.249 "uuid": "5c496bf2-23c5-49d5-aacd-79fa910d577a", 00:18:14.249 "is_configured": true, 00:18:14.249 "data_offset": 256, 00:18:14.249 "data_size": 7936 00:18:14.249 }, 00:18:14.249 { 00:18:14.249 "name": "BaseBdev2", 00:18:14.249 "uuid": "cb31e91a-fa42-4f1b-83d6-b7bf70699e8e", 00:18:14.249 "is_configured": true, 00:18:14.249 "data_offset": 256, 00:18:14.249 "data_size": 7936 00:18:14.249 } 00:18:14.249 ] 00:18:14.249 } 00:18:14.249 } 00:18:14.249 }' 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:14.249 BaseBdev2' 00:18:14.249 16:19:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.249 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.512 16:19:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.512 16:19:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.512 [2024-09-28 16:19:28.996938] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.512 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.513 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.513 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.513 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.513 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.513 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.513 "name": "Existed_Raid", 00:18:14.513 "uuid": "9380e6de-aaad-4e9a-8537-50790fad131e", 00:18:14.513 "strip_size_kb": 0, 00:18:14.513 "state": "online", 00:18:14.513 "raid_level": "raid1", 
00:18:14.513 "superblock": true, 00:18:14.513 "num_base_bdevs": 2, 00:18:14.513 "num_base_bdevs_discovered": 1, 00:18:14.513 "num_base_bdevs_operational": 1, 00:18:14.513 "base_bdevs_list": [ 00:18:14.513 { 00:18:14.513 "name": null, 00:18:14.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.513 "is_configured": false, 00:18:14.513 "data_offset": 0, 00:18:14.513 "data_size": 7936 00:18:14.513 }, 00:18:14.513 { 00:18:14.513 "name": "BaseBdev2", 00:18:14.513 "uuid": "cb31e91a-fa42-4f1b-83d6-b7bf70699e8e", 00:18:14.513 "is_configured": true, 00:18:14.513 "data_offset": 256, 00:18:14.513 "data_size": 7936 00:18:14.513 } 00:18:14.513 ] 00:18:14.513 }' 00:18:14.513 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.513 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.081 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.081 [2024-09-28 16:19:29.542887] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.081 [2024-09-28 16:19:29.542986] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.081 [2024-09-28 16:19:29.637951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.082 [2024-09-28 16:19:29.638005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.082 [2024-09-28 16:19:29.638016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.082 16:19:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87206
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87206 ']'
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87206
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87206
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:15.082 killing process with pid 87206 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87206'
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87206
00:18:15.082 [2024-09-28 16:19:29.737726] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:15.082 16:19:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87206
00:18:15.082 [2024-09-28 16:19:29.753259] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:16.463 16:19:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0
00:18:16.463
00:18:16.463 real 0m5.028s
00:18:16.463 user 0m7.140s
00:18:16.463 sys 0m0.879s
00:18:16.463 16:19:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:16.463 16:19:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:16.463 ************************************
00:18:16.463 END TEST raid_state_function_test_sb_md_separate
00:18:16.463 ************************************
00:18:16.463 16:19:30 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2
00:18:16.463 16:19:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:18:16.463 16:19:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:16.463 16:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:16.463 ************************************
00:18:16.463 START TEST raid_superblock_test_md_separate
00:18:16.463 ************************************
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87457
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87457
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87457 ']'
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:16.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:16.463 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:16.463 [2024-09-28 16:19:31.101007] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:18:16.463 [2024-09-28 16:19:31.101117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87457 ]
00:18:16.723 [2024-09-28 16:19:31.262799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:16.983 [2024-09-28 16:19:31.455886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:16.983 [2024-09-28 16:19:31.648378] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:16.983 [2024-09-28 16:19:31.648426] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.243 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:17.503 malloc1
00:18:17.503 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.503 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:17.503 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.503 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:17.503 [2024-09-28 16:19:31.958518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:17.503 [2024-09-28 16:19:31.958590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:17.503 [2024-09-28 16:19:31.958617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:17.503 [2024-09-28 16:19:31.958627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:17.504 [2024-09-28 16:19:31.960404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:17.504 [2024-09-28 16:19:31.960440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:17.504 pt1
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.504 16:19:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:17.504 malloc2
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:17.504 [2024-09-28 16:19:32.042259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:17.504 [2024-09-28 16:19:32.042311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:17.504 [2024-09-28 16:19:32.042334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:17.504 [2024-09-28 16:19:32.042343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:17.504 [2024-09-28 16:19:32.044080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:17.504 [2024-09-28 16:19:32.044117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:17.504 pt2
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:17.504 [2024-09-28 16:19:32.054300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:17.504 [2024-09-28 16:19:32.055971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:17.504 [2024-09-28 16:19:32.056142] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:17.504 [2024-09-28 16:19:32.056156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:17.504 [2024-09-28 16:19:32.056242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:18:17.504 [2024-09-28 16:19:32.056375] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:17.504 [2024-09-28 16:19:32.056395] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:17.504 [2024-09-28 16:19:32.056490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:17.504 "name": "raid_bdev1",
00:18:17.504 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112",
00:18:17.504 "strip_size_kb": 0,
00:18:17.504 "state": "online",
00:18:17.504 "raid_level": "raid1",
00:18:17.504 "superblock": true,
00:18:17.504 "num_base_bdevs": 2,
00:18:17.504 "num_base_bdevs_discovered": 2,
00:18:17.504 "num_base_bdevs_operational": 2,
00:18:17.504 "base_bdevs_list": [
00:18:17.504 {
00:18:17.504 "name": "pt1",
00:18:17.504 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:17.504 "is_configured": true,
00:18:17.504 "data_offset": 256,
00:18:17.504 "data_size": 7936
00:18:17.504 },
00:18:17.504 {
00:18:17.504 "name": "pt2",
00:18:17.504 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:17.504 "is_configured": true,
00:18:17.504 "data_offset": 256,
00:18:17.504 "data_size": 7936
00:18:17.504 }
00:18:17.504 ]
00:18:17.504 }'
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:17.504 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.074 [2024-09-28 16:19:32.493710] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:18.074 "name": "raid_bdev1",
00:18:18.074 "aliases": [
00:18:18.074 "6f6c67ea-e83b-4df6-8819-81ea0cdbb112"
00:18:18.074 ],
00:18:18.074 "product_name": "Raid Volume",
00:18:18.074 "block_size": 4096,
00:18:18.074 "num_blocks": 7936,
00:18:18.074 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112",
00:18:18.074 "md_size": 32,
00:18:18.074 "md_interleave": false,
00:18:18.074 "dif_type": 0,
00:18:18.074 "assigned_rate_limits": {
00:18:18.074 "rw_ios_per_sec": 0,
00:18:18.074 "rw_mbytes_per_sec": 0,
00:18:18.074 "r_mbytes_per_sec": 0,
00:18:18.074 "w_mbytes_per_sec": 0
00:18:18.074 },
00:18:18.074 "claimed": false,
00:18:18.074 "zoned": false,
00:18:18.074 "supported_io_types": {
00:18:18.074 "read": true,
00:18:18.074 "write": true,
00:18:18.074 "unmap": false,
00:18:18.074 "flush": false,
00:18:18.074 "reset": true,
00:18:18.074 "nvme_admin": false,
00:18:18.074 "nvme_io": false,
00:18:18.074 "nvme_io_md": false,
00:18:18.074 "write_zeroes": true,
00:18:18.074 "zcopy": false,
00:18:18.074 "get_zone_info": false,
00:18:18.074 "zone_management": false,
00:18:18.074 "zone_append": false,
00:18:18.074 "compare": false,
00:18:18.074 "compare_and_write": false,
00:18:18.074 "abort": false,
00:18:18.074 "seek_hole": false,
00:18:18.074 "seek_data": false,
00:18:18.074 "copy": false,
00:18:18.074 "nvme_iov_md": false
00:18:18.074 },
00:18:18.074 "memory_domains": [
00:18:18.074 {
00:18:18.074 "dma_device_id": "system",
00:18:18.074 "dma_device_type": 1
00:18:18.074 },
00:18:18.074 {
00:18:18.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:18.074 "dma_device_type": 2
00:18:18.074 },
00:18:18.074 {
00:18:18.074 "dma_device_id": "system",
00:18:18.074 "dma_device_type": 1
00:18:18.074 },
00:18:18.074 {
00:18:18.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:18.074 "dma_device_type": 2
00:18:18.074 }
00:18:18.074 ],
00:18:18.074 "driver_specific": {
00:18:18.074 "raid": {
00:18:18.074 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112",
00:18:18.074 "strip_size_kb": 0,
00:18:18.074 "state": "online",
00:18:18.074 "raid_level": "raid1",
00:18:18.074 "superblock": true,
00:18:18.074 "num_base_bdevs": 2,
00:18:18.074 "num_base_bdevs_discovered": 2,
00:18:18.074 "num_base_bdevs_operational": 2,
00:18:18.074 "base_bdevs_list": [
00:18:18.074 {
00:18:18.074 "name": "pt1",
00:18:18.074 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:18.074 "is_configured": true,
00:18:18.074 "data_offset": 256,
00:18:18.074 "data_size": 7936
00:18:18.074 },
00:18:18.074 {
00:18:18.074 "name": "pt2",
00:18:18.074 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:18.074 "is_configured": true,
00:18:18.074 "data_offset": 256,
00:18:18.074 "data_size": 7936
00:18:18.074 }
00:18:18.074 ]
00:18:18.074 }
00:18:18.074 }
00:18:18.074 }'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:18.074 pt2'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.074 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.075 [2024-09-28 16:19:32.713324] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:18.075 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.075 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6f6c67ea-e83b-4df6-8819-81ea0cdbb112
00:18:18.075 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 6f6c67ea-e83b-4df6-8819-81ea0cdbb112 ']'
00:18:18.075 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:18.075 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.075 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.075 [2024-09-28 16:19:32.756999] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:18.075 [2024-09-28 16:19:32.757024] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:18.075 [2024-09-28 16:19:32.757084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:18.075 [2024-09-28 16:19:32.757133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:18.075 [2024-09-28 16:19:32.757143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.335 [2024-09-28 16:19:32.896822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:18.335 [2024-09-28 16:19:32.898550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:18.335 [2024-09-28 16:19:32.898666] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:18:18.335 [2024-09-28 16:19:32.898749] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:18:18.335 [2024-09-28 16:19:32.898786] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:18.335 [2024-09-28 16:19:32.898807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:18:18.335 request:
00:18:18.335 {
00:18:18.335 "name": "raid_bdev1",
00:18:18.335 "raid_level": "raid1",
00:18:18.335 "base_bdevs": [
00:18:18.335 "malloc1",
00:18:18.335 "malloc2"
00:18:18.335 ],
00:18:18.335 "superblock": false,
00:18:18.335 "method": "bdev_raid_create",
00:18:18.335 "req_id": 1
00:18:18.335 }
00:18:18.335 Got JSON-RPC error response
00:18:18.335 response:
00:18:18.335 {
00:18:18.335 "code": -17,
00:18:18.335 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:18.335 }
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.335 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.335 [2024-09-28 16:19:32.964650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:18.335 [2024-09-28 16:19:32.964749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:18.335 [2024-09-28 16:19:32.964777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:18.335 [2024-09-28 16:19:32.964802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:18.335 [2024-09-28 16:19:32.966538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:18.335 [2024-09-28 16:19:32.966609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:18.335 [2024-09-28 16:19:32.966666] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:18.335 [2024-09-28 16:19:32.966740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:18.336 pt1
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.336 16:19:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.595 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:18.595 "name": "raid_bdev1",
00:18:18.595 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112",
00:18:18.595 "strip_size_kb": 0,
00:18:18.595 "state": "configuring",
00:18:18.595 "raid_level": "raid1",
00:18:18.595 "superblock": true,
00:18:18.595 "num_base_bdevs": 2,
00:18:18.595 "num_base_bdevs_discovered": 1,
00:18:18.595 "num_base_bdevs_operational": 2,
00:18:18.595 "base_bdevs_list": [
00:18:18.595 {
00:18:18.595 "name": "pt1",
00:18:18.595 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:18.595 "is_configured": true,
00:18:18.595 "data_offset": 256,
00:18:18.595 "data_size": 7936
00:18:18.595 },
00:18:18.595 {
00:18:18.595 "name": null,
00:18:18.595 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:18.595 "is_configured": false,
00:18:18.595 "data_offset": 256,
00:18:18.595 "data_size": 7936
00:18:18.595 }
00:18:18.595 ]
00:18:18.595 }'
00:18:18.595 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:18.595 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:18.855 [2024-09-28 16:19:33.395905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:18.855 [2024-09-28 16:19:33.395959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:18.855 [2024-09-28 16:19:33.395975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:18.855 [2024-09-28 16:19:33.395985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:18.855 [2024-09-28 16:19:33.396141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:18.855 [2024-09-28 16:19:33.396157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:18.855 [2024-09-28 16:19:33.396190] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:18.855 [2024-09-28 16:19:33.396207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:18.855 [2024-09-28 16:19:33.396313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:18:18.855 [2024-09-28 16:19:33.396324] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:18.855 [2024-09-28 16:19:33.396382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:18:18.855 [2024-09-28 16:19:33.396492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:18:18.855 [2024-09-28 16:19:33.396499] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:18:18.855 [2024-09-28 16:19:33.396583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:18.855 pt2
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:18.855 16:19:33
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.855 "name": "raid_bdev1", 00:18:18.855 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112", 00:18:18.855 "strip_size_kb": 0, 00:18:18.855 "state": "online", 00:18:18.855 "raid_level": "raid1", 00:18:18.855 "superblock": true, 00:18:18.855 "num_base_bdevs": 2, 00:18:18.855 "num_base_bdevs_discovered": 2, 00:18:18.855 "num_base_bdevs_operational": 2, 00:18:18.855 "base_bdevs_list": [ 00:18:18.855 { 00:18:18.855 "name": "pt1", 00:18:18.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.855 "is_configured": true, 00:18:18.855 "data_offset": 256, 00:18:18.855 "data_size": 
7936 00:18:18.855 }, 00:18:18.855 { 00:18:18.855 "name": "pt2", 00:18:18.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.855 "is_configured": true, 00:18:18.855 "data_offset": 256, 00:18:18.855 "data_size": 7936 00:18:18.855 } 00:18:18.855 ] 00:18:18.855 }' 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.855 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.425 [2024-09-28 16:19:33.851427] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.425 16:19:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.425 "name": "raid_bdev1", 00:18:19.425 "aliases": [ 00:18:19.425 "6f6c67ea-e83b-4df6-8819-81ea0cdbb112" 00:18:19.425 ], 00:18:19.425 "product_name": "Raid Volume", 00:18:19.425 "block_size": 4096, 00:18:19.425 "num_blocks": 7936, 00:18:19.425 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112", 00:18:19.425 "md_size": 32, 00:18:19.425 "md_interleave": false, 00:18:19.425 "dif_type": 0, 00:18:19.425 "assigned_rate_limits": { 00:18:19.425 "rw_ios_per_sec": 0, 00:18:19.425 "rw_mbytes_per_sec": 0, 00:18:19.425 "r_mbytes_per_sec": 0, 00:18:19.425 "w_mbytes_per_sec": 0 00:18:19.425 }, 00:18:19.425 "claimed": false, 00:18:19.425 "zoned": false, 00:18:19.425 "supported_io_types": { 00:18:19.425 "read": true, 00:18:19.425 "write": true, 00:18:19.425 "unmap": false, 00:18:19.425 "flush": false, 00:18:19.425 "reset": true, 00:18:19.425 "nvme_admin": false, 00:18:19.425 "nvme_io": false, 00:18:19.425 "nvme_io_md": false, 00:18:19.425 "write_zeroes": true, 00:18:19.425 "zcopy": false, 00:18:19.425 "get_zone_info": false, 00:18:19.425 "zone_management": false, 00:18:19.425 "zone_append": false, 00:18:19.425 "compare": false, 00:18:19.425 "compare_and_write": false, 00:18:19.425 "abort": false, 00:18:19.425 "seek_hole": false, 00:18:19.425 "seek_data": false, 00:18:19.425 "copy": false, 00:18:19.425 "nvme_iov_md": false 00:18:19.425 }, 00:18:19.425 "memory_domains": [ 00:18:19.425 { 00:18:19.425 "dma_device_id": "system", 00:18:19.425 "dma_device_type": 1 00:18:19.425 }, 00:18:19.425 { 00:18:19.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.425 "dma_device_type": 2 00:18:19.425 }, 00:18:19.425 { 00:18:19.425 "dma_device_id": "system", 00:18:19.425 "dma_device_type": 1 00:18:19.425 }, 00:18:19.425 { 00:18:19.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.425 "dma_device_type": 2 00:18:19.425 } 00:18:19.425 ], 00:18:19.425 "driver_specific": { 00:18:19.425 "raid": { 00:18:19.425 "uuid": 
"6f6c67ea-e83b-4df6-8819-81ea0cdbb112", 00:18:19.425 "strip_size_kb": 0, 00:18:19.425 "state": "online", 00:18:19.425 "raid_level": "raid1", 00:18:19.425 "superblock": true, 00:18:19.425 "num_base_bdevs": 2, 00:18:19.425 "num_base_bdevs_discovered": 2, 00:18:19.425 "num_base_bdevs_operational": 2, 00:18:19.426 "base_bdevs_list": [ 00:18:19.426 { 00:18:19.426 "name": "pt1", 00:18:19.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.426 "is_configured": true, 00:18:19.426 "data_offset": 256, 00:18:19.426 "data_size": 7936 00:18:19.426 }, 00:18:19.426 { 00:18:19.426 "name": "pt2", 00:18:19.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.426 "is_configured": true, 00:18:19.426 "data_offset": 256, 00:18:19.426 "data_size": 7936 00:18:19.426 } 00:18:19.426 ] 00:18:19.426 } 00:18:19.426 } 00:18:19.426 }' 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:19.426 pt2' 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:19.426 16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.426 
16:19:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | 
.uuid' 00:18:19.426 [2024-09-28 16:19:34.083002] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 6f6c67ea-e83b-4df6-8819-81ea0cdbb112 '!=' 6f6c67ea-e83b-4df6-8819-81ea0cdbb112 ']' 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.426 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.686 [2024-09-28 16:19:34.114794] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.686 16:19:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.686 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.687 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.687 "name": "raid_bdev1", 00:18:19.687 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112", 00:18:19.687 "strip_size_kb": 0, 00:18:19.687 "state": "online", 00:18:19.687 "raid_level": "raid1", 00:18:19.687 "superblock": true, 00:18:19.687 "num_base_bdevs": 2, 00:18:19.687 "num_base_bdevs_discovered": 1, 00:18:19.687 "num_base_bdevs_operational": 1, 00:18:19.687 "base_bdevs_list": [ 00:18:19.687 { 00:18:19.687 "name": null, 00:18:19.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.687 "is_configured": false, 00:18:19.687 "data_offset": 0, 00:18:19.687 "data_size": 7936 00:18:19.687 }, 00:18:19.687 { 00:18:19.687 "name": "pt2", 00:18:19.687 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:19.687 "is_configured": true, 00:18:19.687 "data_offset": 256, 00:18:19.687 "data_size": 7936 00:18:19.687 } 00:18:19.687 ] 00:18:19.687 }' 00:18:19.687 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.687 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.947 [2024-09-28 16:19:34.577940] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.947 [2024-09-28 16:19:34.578012] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.947 [2024-09-28 16:19:34.578078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.947 [2024-09-28 16:19:34.578130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.947 [2024-09-28 16:19:34.578161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.947 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.207 [2024-09-28 16:19:34.653806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.207 [2024-09-28 16:19:34.653853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.207 [2024-09-28 16:19:34.653867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:20.207 [2024-09-28 16:19:34.653877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.207 [2024-09-28 16:19:34.655675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.207 [2024-09-28 16:19:34.655716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.207 [2024-09-28 16:19:34.655756] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.207 [2024-09-28 16:19:34.655802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.207 [2024-09-28 16:19:34.655887] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:20.207 [2024-09-28 16:19:34.655899] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.207 [2024-09-28 16:19:34.655965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:20.207 [2024-09-28 16:19:34.656075] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:20.207 [2024-09-28 16:19:34.656083] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:20.207 [2024-09-28 16:19:34.656165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.207 pt2 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.207 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.207 "name": "raid_bdev1", 00:18:20.207 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112", 00:18:20.208 
"strip_size_kb": 0, 00:18:20.208 "state": "online", 00:18:20.208 "raid_level": "raid1", 00:18:20.208 "superblock": true, 00:18:20.208 "num_base_bdevs": 2, 00:18:20.208 "num_base_bdevs_discovered": 1, 00:18:20.208 "num_base_bdevs_operational": 1, 00:18:20.208 "base_bdevs_list": [ 00:18:20.208 { 00:18:20.208 "name": null, 00:18:20.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.208 "is_configured": false, 00:18:20.208 "data_offset": 256, 00:18:20.208 "data_size": 7936 00:18:20.208 }, 00:18:20.208 { 00:18:20.208 "name": "pt2", 00:18:20.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.208 "is_configured": true, 00:18:20.208 "data_offset": 256, 00:18:20.208 "data_size": 7936 00:18:20.208 } 00:18:20.208 ] 00:18:20.208 }' 00:18:20.208 16:19:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.208 16:19:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.467 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.467 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.467 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.467 [2024-09-28 16:19:35.145043] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.467 [2024-09-28 16:19:35.145119] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.467 [2024-09-28 16:19:35.145176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.467 [2024-09-28 16:19:35.145236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.467 [2024-09-28 16:19:35.145267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state 
offline 00:18:20.467 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.727 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.727 [2024-09-28 16:19:35.192992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.727 [2024-09-28 16:19:35.193080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.727 [2024-09-28 16:19:35.193111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:20.727 [2024-09-28 16:19:35.193121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.727 [2024-09-28 16:19:35.194936] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.727 [2024-09-28 16:19:35.194972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.727 [2024-09-28 16:19:35.195013] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:20.727 [2024-09-28 16:19:35.195046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.728 [2024-09-28 16:19:35.195150] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:20.728 [2024-09-28 16:19:35.195159] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.728 [2024-09-28 16:19:35.195175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:20.728 [2024-09-28 16:19:35.195263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.728 [2024-09-28 16:19:35.195322] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:20.728 [2024-09-28 16:19:35.195331] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.728 [2024-09-28 16:19:35.195403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:20.728 [2024-09-28 16:19:35.195501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:20.728 [2024-09-28 16:19:35.195511] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:20.728 [2024-09-28 16:19:35.195598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.728 pt1 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.728 "name": "raid_bdev1", 
00:18:20.728 "uuid": "6f6c67ea-e83b-4df6-8819-81ea0cdbb112", 00:18:20.728 "strip_size_kb": 0, 00:18:20.728 "state": "online", 00:18:20.728 "raid_level": "raid1", 00:18:20.728 "superblock": true, 00:18:20.728 "num_base_bdevs": 2, 00:18:20.728 "num_base_bdevs_discovered": 1, 00:18:20.728 "num_base_bdevs_operational": 1, 00:18:20.728 "base_bdevs_list": [ 00:18:20.728 { 00:18:20.728 "name": null, 00:18:20.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.728 "is_configured": false, 00:18:20.728 "data_offset": 256, 00:18:20.728 "data_size": 7936 00:18:20.728 }, 00:18:20.728 { 00:18:20.728 "name": "pt2", 00:18:20.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.728 "is_configured": true, 00:18:20.728 "data_offset": 256, 00:18:20.728 "data_size": 7936 00:18:20.728 } 00:18:20.728 ] 00:18:20.728 }' 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.728 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.988 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:20.988 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:20.988 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.988 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.988 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.248 [2024-09-28 16:19:35.692328] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 6f6c67ea-e83b-4df6-8819-81ea0cdbb112 '!=' 6f6c67ea-e83b-4df6-8819-81ea0cdbb112 ']' 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87457 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87457 ']' 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87457 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87457 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.248 killing process with pid 87457 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87457' 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87457 00:18:21.248 [2024-09-28 
16:19:35.779999] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.248 [2024-09-28 16:19:35.780061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.248 [2024-09-28 16:19:35.780095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.248 [2024-09-28 16:19:35.780108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:21.248 16:19:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87457 00:18:21.508 [2024-09-28 16:19:35.986407] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.890 16:19:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:22.890 00:18:22.890 real 0m6.155s 00:18:22.890 user 0m9.254s 00:18:22.890 sys 0m1.123s 00:18:22.890 16:19:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:22.890 16:19:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.890 ************************************ 00:18:22.890 END TEST raid_superblock_test_md_separate 00:18:22.890 ************************************ 00:18:22.890 16:19:37 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:22.890 16:19:37 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:22.890 16:19:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:22.890 16:19:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.890 16:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.890 ************************************ 00:18:22.890 START TEST raid_rebuild_test_sb_md_separate 00:18:22.890 ************************************ 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:22.890 16:19:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87781 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87781 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87781 ']' 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:22.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.890 16:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.890 [2024-09-28 16:19:37.349601] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:22.890 [2024-09-28 16:19:37.349805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:22.890 Zero copy mechanism will not be used. 00:18:22.891 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87781 ] 00:18:22.891 [2024-09-28 16:19:37.512753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.151 [2024-09-28 16:19:37.710701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.410 [2024-09-28 16:19:37.914877] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.410 [2024-09-28 16:19:37.914982] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.670 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.671 16:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.671 BaseBdev1_malloc 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.671 [2024-09-28 16:19:38.212563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:23.671 [2024-09-28 16:19:38.212723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.671 [2024-09-28 16:19:38.212765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:23.671 [2024-09-28 16:19:38.212796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.671 [2024-09-28 16:19:38.214557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.671 [2024-09-28 16:19:38.214632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:23.671 BaseBdev1 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:23.671 BaseBdev2_malloc 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.671 [2024-09-28 16:19:38.292410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:23.671 [2024-09-28 16:19:38.292471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.671 [2024-09-28 16:19:38.292490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:23.671 [2024-09-28 16:19:38.292500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.671 [2024-09-28 16:19:38.294270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.671 [2024-09-28 16:19:38.294307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:23.671 BaseBdev2 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.671 spare_malloc 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.671 16:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.671 spare_delay 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.671 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.671 [2024-09-28 16:19:38.353692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:23.671 [2024-09-28 16:19:38.353751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.671 [2024-09-28 16:19:38.353772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:23.671 [2024-09-28 16:19:38.353793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.931 [2024-09-28 16:19:38.355651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.931 [2024-09-28 16:19:38.355696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:23.931 spare 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:23.931 16:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.931 [2024-09-28 16:19:38.365715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.931 [2024-09-28 16:19:38.367360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.931 [2024-09-28 16:19:38.367572] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:23.931 [2024-09-28 16:19:38.367588] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:23.931 [2024-09-28 16:19:38.367655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:23.931 [2024-09-28 16:19:38.367766] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:23.931 [2024-09-28 16:19:38.367774] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:23.931 [2024-09-28 16:19:38.367882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.931 16:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.931 "name": "raid_bdev1", 00:18:23.931 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:23.931 "strip_size_kb": 0, 00:18:23.931 "state": "online", 00:18:23.931 "raid_level": "raid1", 00:18:23.931 "superblock": true, 00:18:23.931 "num_base_bdevs": 2, 00:18:23.931 "num_base_bdevs_discovered": 2, 00:18:23.931 "num_base_bdevs_operational": 2, 00:18:23.931 "base_bdevs_list": [ 00:18:23.931 { 00:18:23.931 "name": "BaseBdev1", 00:18:23.931 "uuid": "20a16b9a-afaa-5176-ad1b-f221dfdac2af", 00:18:23.931 "is_configured": true, 00:18:23.931 "data_offset": 256, 00:18:23.931 "data_size": 7936 00:18:23.931 }, 00:18:23.931 { 00:18:23.931 "name": "BaseBdev2", 00:18:23.931 "uuid": 
"64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:23.931 "is_configured": true, 00:18:23.931 "data_offset": 256, 00:18:23.931 "data_size": 7936 00:18:23.931 } 00:18:23.931 ] 00:18:23.931 }' 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.931 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.192 [2024-09-28 16:19:38.833198] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.192 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:24.452 16:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.452 16:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:24.452 [2024-09-28 16:19:39.080627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:24.452 /dev/nbd0 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.452 16:19:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.452 1+0 records in 00:18:24.452 1+0 records out 00:18:24.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287994 s, 14.2 MB/s 00:18:24.452 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:24.713 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:25.282 7936+0 records in 00:18:25.282 7936+0 records out 00:18:25.282 32505856 bytes (33 MB, 31 MiB) copied, 0.618078 s, 52.6 MB/s 00:18:25.282 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:25.283 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.283 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:25.283 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.283 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:25.283 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.283 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.543 [2024-09-28 16:19:39.970247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.543 16:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.543 [2024-09-28 16:19:40.002826] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.543 16:19:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.543 "name": "raid_bdev1", 00:18:25.543 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:25.543 "strip_size_kb": 0, 00:18:25.543 "state": "online", 00:18:25.543 "raid_level": "raid1", 00:18:25.543 "superblock": true, 00:18:25.543 "num_base_bdevs": 2, 00:18:25.543 "num_base_bdevs_discovered": 1, 00:18:25.543 "num_base_bdevs_operational": 1, 00:18:25.543 "base_bdevs_list": [ 00:18:25.543 { 00:18:25.543 "name": null, 00:18:25.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.543 "is_configured": false, 00:18:25.543 "data_offset": 0, 00:18:25.543 "data_size": 7936 00:18:25.543 }, 00:18:25.543 { 00:18:25.543 "name": "BaseBdev2", 00:18:25.543 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:25.543 "is_configured": true, 00:18:25.543 "data_offset": 256, 00:18:25.543 "data_size": 7936 00:18:25.543 } 
00:18:25.543 ] 00:18:25.543 }' 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.543 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.803 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:25.803 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.803 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.803 [2024-09-28 16:19:40.434180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.803 [2024-09-28 16:19:40.447326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:25.803 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.803 [2024-09-28 16:19:40.449004] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.803 16:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.185 16:19:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.185 "name": "raid_bdev1", 00:18:27.185 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:27.185 "strip_size_kb": 0, 00:18:27.185 "state": "online", 00:18:27.185 "raid_level": "raid1", 00:18:27.185 "superblock": true, 00:18:27.185 "num_base_bdevs": 2, 00:18:27.185 "num_base_bdevs_discovered": 2, 00:18:27.185 "num_base_bdevs_operational": 2, 00:18:27.185 "process": { 00:18:27.185 "type": "rebuild", 00:18:27.185 "target": "spare", 00:18:27.185 "progress": { 00:18:27.185 "blocks": 2560, 00:18:27.185 "percent": 32 00:18:27.185 } 00:18:27.185 }, 00:18:27.185 "base_bdevs_list": [ 00:18:27.185 { 00:18:27.185 "name": "spare", 00:18:27.185 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:27.185 "is_configured": true, 00:18:27.185 "data_offset": 256, 00:18:27.185 "data_size": 7936 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "name": "BaseBdev2", 00:18:27.185 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:27.185 "is_configured": true, 00:18:27.185 "data_offset": 256, 00:18:27.185 "data_size": 7936 00:18:27.185 } 00:18:27.185 ] 00:18:27.185 }' 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.185 [2024-09-28 16:19:41.613486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.185 [2024-09-28 16:19:41.653939] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:27.185 [2024-09-28 16:19:41.653997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.185 [2024-09-28 16:19:41.654010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.185 [2024-09-28 16:19:41.654024] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.185 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.186 "name": "raid_bdev1", 00:18:27.186 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:27.186 "strip_size_kb": 0, 00:18:27.186 "state": "online", 00:18:27.186 "raid_level": "raid1", 00:18:27.186 "superblock": true, 00:18:27.186 "num_base_bdevs": 2, 00:18:27.186 "num_base_bdevs_discovered": 1, 00:18:27.186 "num_base_bdevs_operational": 1, 00:18:27.186 "base_bdevs_list": [ 00:18:27.186 { 00:18:27.186 "name": null, 00:18:27.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.186 "is_configured": false, 00:18:27.186 "data_offset": 0, 00:18:27.186 "data_size": 7936 00:18:27.186 }, 00:18:27.186 { 00:18:27.186 "name": "BaseBdev2", 00:18:27.186 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:27.186 "is_configured": true, 00:18:27.186 "data_offset": 
256, 00:18:27.186 "data_size": 7936 00:18:27.186 } 00:18:27.186 ] 00:18:27.186 }' 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.186 16:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.756 "name": "raid_bdev1", 00:18:27.756 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:27.756 "strip_size_kb": 0, 00:18:27.756 "state": "online", 00:18:27.756 "raid_level": "raid1", 00:18:27.756 "superblock": true, 00:18:27.756 "num_base_bdevs": 2, 00:18:27.756 "num_base_bdevs_discovered": 1, 00:18:27.756 "num_base_bdevs_operational": 1, 
00:18:27.756 "base_bdevs_list": [ 00:18:27.756 { 00:18:27.756 "name": null, 00:18:27.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.756 "is_configured": false, 00:18:27.756 "data_offset": 0, 00:18:27.756 "data_size": 7936 00:18:27.756 }, 00:18:27.756 { 00:18:27.756 "name": "BaseBdev2", 00:18:27.756 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:27.756 "is_configured": true, 00:18:27.756 "data_offset": 256, 00:18:27.756 "data_size": 7936 00:18:27.756 } 00:18:27.756 ] 00:18:27.756 }' 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.756 [2024-09-28 16:19:42.267876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.756 [2024-09-28 16:19:42.280659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.756 [2024-09-28 16:19:42.282363] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.756 16:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:28.696 16:19:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.696 "name": "raid_bdev1", 00:18:28.696 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:28.696 "strip_size_kb": 0, 00:18:28.696 "state": "online", 00:18:28.696 "raid_level": "raid1", 00:18:28.696 "superblock": true, 00:18:28.696 "num_base_bdevs": 2, 00:18:28.696 "num_base_bdevs_discovered": 2, 00:18:28.696 "num_base_bdevs_operational": 2, 00:18:28.696 "process": { 00:18:28.696 "type": "rebuild", 00:18:28.696 "target": "spare", 00:18:28.696 "progress": { 00:18:28.696 "blocks": 2560, 00:18:28.696 "percent": 32 00:18:28.696 } 00:18:28.696 }, 00:18:28.696 "base_bdevs_list": [ 00:18:28.696 { 00:18:28.696 "name": "spare", 00:18:28.696 "uuid": 
"f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:28.696 "is_configured": true, 00:18:28.696 "data_offset": 256, 00:18:28.696 "data_size": 7936 00:18:28.696 }, 00:18:28.696 { 00:18:28.696 "name": "BaseBdev2", 00:18:28.696 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:28.696 "is_configured": true, 00:18:28.696 "data_offset": 256, 00:18:28.696 "data_size": 7936 00:18:28.696 } 00:18:28.696 ] 00:18:28.696 }' 00:18:28.696 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:28.956 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=716 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.956 
16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.956 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.956 "name": "raid_bdev1", 00:18:28.956 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:28.956 "strip_size_kb": 0, 00:18:28.956 "state": "online", 00:18:28.956 "raid_level": "raid1", 00:18:28.956 "superblock": true, 00:18:28.956 "num_base_bdevs": 2, 00:18:28.956 "num_base_bdevs_discovered": 2, 00:18:28.956 "num_base_bdevs_operational": 2, 00:18:28.956 "process": { 00:18:28.956 "type": "rebuild", 00:18:28.956 "target": "spare", 00:18:28.956 "progress": { 00:18:28.956 "blocks": 2816, 00:18:28.957 "percent": 35 00:18:28.957 } 00:18:28.957 }, 00:18:28.957 "base_bdevs_list": [ 00:18:28.957 { 00:18:28.957 "name": "spare", 00:18:28.957 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:28.957 "is_configured": true, 00:18:28.957 "data_offset": 256, 00:18:28.957 "data_size": 7936 00:18:28.957 
}, 00:18:28.957 { 00:18:28.957 "name": "BaseBdev2", 00:18:28.957 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:28.957 "is_configured": true, 00:18:28.957 "data_offset": 256, 00:18:28.957 "data_size": 7936 00:18:28.957 } 00:18:28.957 ] 00:18:28.957 }' 00:18:28.957 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.957 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.957 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.957 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.957 16:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.896 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.896 "name": "raid_bdev1", 00:18:29.896 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:29.896 "strip_size_kb": 0, 00:18:29.896 "state": "online", 00:18:29.896 "raid_level": "raid1", 00:18:29.896 "superblock": true, 00:18:29.896 "num_base_bdevs": 2, 00:18:29.896 "num_base_bdevs_discovered": 2, 00:18:29.896 "num_base_bdevs_operational": 2, 00:18:29.896 "process": { 00:18:29.896 "type": "rebuild", 00:18:29.896 "target": "spare", 00:18:29.896 "progress": { 00:18:29.896 "blocks": 5632, 00:18:29.896 "percent": 70 00:18:29.896 } 00:18:29.896 }, 00:18:29.896 "base_bdevs_list": [ 00:18:29.896 { 00:18:29.896 "name": "spare", 00:18:29.896 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:29.896 "is_configured": true, 00:18:29.896 "data_offset": 256, 00:18:29.896 "data_size": 7936 00:18:29.896 }, 00:18:29.896 { 00:18:29.897 "name": "BaseBdev2", 00:18:29.897 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:29.897 "is_configured": true, 00:18:29.897 "data_offset": 256, 00:18:29.897 "data_size": 7936 00:18:29.897 } 00:18:29.897 ] 00:18:29.897 }' 00:18:29.897 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.156 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.156 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.156 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.156 16:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.727 [2024-09-28 16:19:45.394144] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:30.727 [2024-09-28 16:19:45.394302] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:30.727 [2024-09-28 16:19:45.394414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.987 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.248 "name": "raid_bdev1", 00:18:31.248 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:31.248 
"strip_size_kb": 0, 00:18:31.248 "state": "online", 00:18:31.248 "raid_level": "raid1", 00:18:31.248 "superblock": true, 00:18:31.248 "num_base_bdevs": 2, 00:18:31.248 "num_base_bdevs_discovered": 2, 00:18:31.248 "num_base_bdevs_operational": 2, 00:18:31.248 "base_bdevs_list": [ 00:18:31.248 { 00:18:31.248 "name": "spare", 00:18:31.248 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:31.248 "is_configured": true, 00:18:31.248 "data_offset": 256, 00:18:31.248 "data_size": 7936 00:18:31.248 }, 00:18:31.248 { 00:18:31.248 "name": "BaseBdev2", 00:18:31.248 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:31.248 "is_configured": true, 00:18:31.248 "data_offset": 256, 00:18:31.248 "data_size": 7936 00:18:31.248 } 00:18:31.248 ] 00:18:31.248 }' 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.248 16:19:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.248 "name": "raid_bdev1", 00:18:31.248 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:31.248 "strip_size_kb": 0, 00:18:31.248 "state": "online", 00:18:31.248 "raid_level": "raid1", 00:18:31.248 "superblock": true, 00:18:31.248 "num_base_bdevs": 2, 00:18:31.248 "num_base_bdevs_discovered": 2, 00:18:31.248 "num_base_bdevs_operational": 2, 00:18:31.248 "base_bdevs_list": [ 00:18:31.248 { 00:18:31.248 "name": "spare", 00:18:31.248 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:31.248 "is_configured": true, 00:18:31.248 "data_offset": 256, 00:18:31.248 "data_size": 7936 00:18:31.248 }, 00:18:31.248 { 00:18:31.248 "name": "BaseBdev2", 00:18:31.248 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:31.248 "is_configured": true, 00:18:31.248 "data_offset": 256, 00:18:31.248 "data_size": 7936 00:18:31.248 } 00:18:31.248 ] 00:18:31.248 }' 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.248 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.509 16:19:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.509 16:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.509 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.509 "name": "raid_bdev1", 00:18:31.509 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:31.509 "strip_size_kb": 0, 00:18:31.509 "state": "online", 00:18:31.509 "raid_level": "raid1", 00:18:31.509 "superblock": true, 00:18:31.509 "num_base_bdevs": 2, 00:18:31.509 "num_base_bdevs_discovered": 2, 00:18:31.509 "num_base_bdevs_operational": 2, 00:18:31.509 "base_bdevs_list": [ 00:18:31.509 { 00:18:31.509 "name": "spare", 00:18:31.509 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:31.509 "is_configured": true, 00:18:31.509 "data_offset": 256, 00:18:31.509 "data_size": 7936 00:18:31.509 }, 00:18:31.509 { 00:18:31.509 "name": "BaseBdev2", 00:18:31.509 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:31.509 "is_configured": true, 00:18:31.509 "data_offset": 256, 00:18:31.509 "data_size": 7936 00:18:31.509 } 00:18:31.509 ] 00:18:31.509 }' 00:18:31.509 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.509 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.770 [2024-09-28 16:19:46.411873] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.770 [2024-09-28 16:19:46.411951] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.770 [2024-09-28 16:19:46.412031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.770 [2024-09-28 16:19:46.412101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:18:31.770 [2024-09-28 16:19:46.412176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.770 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:32.031 16:19:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:32.031 /dev/nbd0 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:32.031 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.031 1+0 records in 00:18:32.031 1+0 records out 00:18:32.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371216 
s, 11.0 MB/s 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.032 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:32.291 /dev/nbd1 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.291 1+0 records in 00:18:32.291 1+0 records out 00:18:32.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372858 s, 11.0 MB/s 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.291 16:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:32.551 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:32.551 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:32.551 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:32.551 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:32.551 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:32.551 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.551 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.810 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:33.069 
16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.069 [2024-09-28 16:19:47.569383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.069 [2024-09-28 16:19:47.569471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.069 [2024-09-28 16:19:47.569507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:18:33.069 [2024-09-28 16:19:47.569535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.069 [2024-09-28 16:19:47.571432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.069 [2024-09-28 16:19:47.571518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.069 [2024-09-28 16:19:47.571602] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:33.069 [2024-09-28 16:19:47.571679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.069 [2024-09-28 16:19:47.571882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.069 spare 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.069 [2024-09-28 16:19:47.671803] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:33.069 [2024-09-28 16:19:47.671830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.069 [2024-09-28 16:19:47.671919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:33.069 [2024-09-28 16:19:47.672035] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:33.069 [2024-09-28 16:19:47.672044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:33.069 [2024-09-28 16:19:47.672162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.069 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.070 16:19:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.070 "name": "raid_bdev1", 00:18:33.070 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:33.070 "strip_size_kb": 0, 00:18:33.070 "state": "online", 00:18:33.070 "raid_level": "raid1", 00:18:33.070 "superblock": true, 00:18:33.070 "num_base_bdevs": 2, 00:18:33.070 "num_base_bdevs_discovered": 2, 00:18:33.070 "num_base_bdevs_operational": 2, 00:18:33.070 "base_bdevs_list": [ 00:18:33.070 { 00:18:33.070 "name": "spare", 00:18:33.070 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:33.070 "is_configured": true, 00:18:33.070 "data_offset": 256, 00:18:33.070 "data_size": 7936 00:18:33.070 }, 00:18:33.070 { 00:18:33.070 "name": "BaseBdev2", 00:18:33.070 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:33.070 "is_configured": true, 00:18:33.070 "data_offset": 256, 00:18:33.070 "data_size": 7936 00:18:33.070 } 00:18:33.070 ] 00:18:33.070 }' 00:18:33.070 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.070 16:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.638 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.638 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.638 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.638 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.638 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.638 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.638 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.639 "name": "raid_bdev1", 00:18:33.639 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:33.639 "strip_size_kb": 0, 00:18:33.639 "state": "online", 00:18:33.639 "raid_level": "raid1", 00:18:33.639 "superblock": true, 00:18:33.639 "num_base_bdevs": 2, 00:18:33.639 "num_base_bdevs_discovered": 2, 00:18:33.639 "num_base_bdevs_operational": 2, 00:18:33.639 "base_bdevs_list": [ 00:18:33.639 { 00:18:33.639 "name": "spare", 00:18:33.639 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:33.639 "is_configured": true, 00:18:33.639 "data_offset": 256, 00:18:33.639 "data_size": 7936 00:18:33.639 }, 00:18:33.639 { 00:18:33.639 "name": "BaseBdev2", 00:18:33.639 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:33.639 "is_configured": true, 00:18:33.639 "data_offset": 256, 00:18:33.639 "data_size": 7936 00:18:33.639 } 00:18:33.639 ] 00:18:33.639 }' 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.639 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.898 [2024-09-28 16:19:48.324123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.898 16:19:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.898 "name": "raid_bdev1", 00:18:33.898 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:33.898 "strip_size_kb": 0, 00:18:33.898 "state": "online", 00:18:33.898 "raid_level": "raid1", 00:18:33.898 "superblock": true, 00:18:33.898 "num_base_bdevs": 2, 00:18:33.898 "num_base_bdevs_discovered": 1, 00:18:33.898 "num_base_bdevs_operational": 1, 00:18:33.898 "base_bdevs_list": [ 00:18:33.898 { 00:18:33.898 "name": null, 00:18:33.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.898 "is_configured": false, 00:18:33.898 "data_offset": 0, 00:18:33.898 "data_size": 7936 00:18:33.898 }, 00:18:33.898 { 00:18:33.898 "name": "BaseBdev2", 00:18:33.898 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:33.898 "is_configured": true, 00:18:33.898 "data_offset": 256, 00:18:33.898 "data_size": 7936 00:18:33.898 } 
00:18:33.898 ] 00:18:33.898 }' 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.898 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.158 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.158 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.158 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.158 [2024-09-28 16:19:48.803385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.158 [2024-09-28 16:19:48.803560] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.158 [2024-09-28 16:19:48.803621] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:34.158 [2024-09-28 16:19:48.803680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.158 [2024-09-28 16:19:48.816495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:34.158 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.158 16:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:34.158 [2024-09-28 16:19:48.818207] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.539 "name": "raid_bdev1", 00:18:35.539 
"uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:35.539 "strip_size_kb": 0, 00:18:35.539 "state": "online", 00:18:35.539 "raid_level": "raid1", 00:18:35.539 "superblock": true, 00:18:35.539 "num_base_bdevs": 2, 00:18:35.539 "num_base_bdevs_discovered": 2, 00:18:35.539 "num_base_bdevs_operational": 2, 00:18:35.539 "process": { 00:18:35.539 "type": "rebuild", 00:18:35.539 "target": "spare", 00:18:35.539 "progress": { 00:18:35.539 "blocks": 2560, 00:18:35.539 "percent": 32 00:18:35.539 } 00:18:35.539 }, 00:18:35.539 "base_bdevs_list": [ 00:18:35.539 { 00:18:35.539 "name": "spare", 00:18:35.539 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:35.539 "is_configured": true, 00:18:35.539 "data_offset": 256, 00:18:35.539 "data_size": 7936 00:18:35.539 }, 00:18:35.539 { 00:18:35.539 "name": "BaseBdev2", 00:18:35.539 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:35.539 "is_configured": true, 00:18:35.539 "data_offset": 256, 00:18:35.539 "data_size": 7936 00:18:35.539 } 00:18:35.539 ] 00:18:35.539 }' 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.539 16:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.539 [2024-09-28 16:19:49.978639] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.539 
[2024-09-28 16:19:50.023019] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:35.539 [2024-09-28 16:19:50.023091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.539 [2024-09-28 16:19:50.023104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.539 [2024-09-28 16:19:50.023113] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.539 16:19:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.539 "name": "raid_bdev1", 00:18:35.539 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:35.539 "strip_size_kb": 0, 00:18:35.539 "state": "online", 00:18:35.539 "raid_level": "raid1", 00:18:35.539 "superblock": true, 00:18:35.539 "num_base_bdevs": 2, 00:18:35.539 "num_base_bdevs_discovered": 1, 00:18:35.539 "num_base_bdevs_operational": 1, 00:18:35.539 "base_bdevs_list": [ 00:18:35.539 { 00:18:35.539 "name": null, 00:18:35.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.539 "is_configured": false, 00:18:35.539 "data_offset": 0, 00:18:35.539 "data_size": 7936 00:18:35.539 }, 00:18:35.539 { 00:18:35.539 "name": "BaseBdev2", 00:18:35.539 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:35.539 "is_configured": true, 00:18:35.539 "data_offset": 256, 00:18:35.539 "data_size": 7936 00:18:35.539 } 00:18:35.539 ] 00:18:35.539 }' 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.539 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.109 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:36.109 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.109 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.109 [2024-09-28 16:19:50.508520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.109 [2024-09-28 16:19:50.508577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.109 [2024-09-28 16:19:50.508603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:36.109 [2024-09-28 16:19:50.508615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.109 [2024-09-28 16:19:50.508848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.109 [2024-09-28 16:19:50.508867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.109 [2024-09-28 16:19:50.508914] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:36.109 [2024-09-28 16:19:50.508926] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.109 [2024-09-28 16:19:50.508934] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:36.109 [2024-09-28 16:19:50.508957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.109 [2024-09-28 16:19:50.522537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:36.109 spare 00:18:36.109 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.109 16:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:36.109 [2024-09-28 16:19:50.524402] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:37.047 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.047 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.047 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.047 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.047 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.048 "name": 
"raid_bdev1", 00:18:37.048 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:37.048 "strip_size_kb": 0, 00:18:37.048 "state": "online", 00:18:37.048 "raid_level": "raid1", 00:18:37.048 "superblock": true, 00:18:37.048 "num_base_bdevs": 2, 00:18:37.048 "num_base_bdevs_discovered": 2, 00:18:37.048 "num_base_bdevs_operational": 2, 00:18:37.048 "process": { 00:18:37.048 "type": "rebuild", 00:18:37.048 "target": "spare", 00:18:37.048 "progress": { 00:18:37.048 "blocks": 2560, 00:18:37.048 "percent": 32 00:18:37.048 } 00:18:37.048 }, 00:18:37.048 "base_bdevs_list": [ 00:18:37.048 { 00:18:37.048 "name": "spare", 00:18:37.048 "uuid": "f6e5432e-cddb-5e98-b40d-77a2ac852be0", 00:18:37.048 "is_configured": true, 00:18:37.048 "data_offset": 256, 00:18:37.048 "data_size": 7936 00:18:37.048 }, 00:18:37.048 { 00:18:37.048 "name": "BaseBdev2", 00:18:37.048 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:37.048 "is_configured": true, 00:18:37.048 "data_offset": 256, 00:18:37.048 "data_size": 7936 00:18:37.048 } 00:18:37.048 ] 00:18:37.048 }' 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.048 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 [2024-09-28 16:19:51.688605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:37.048 [2024-09-28 16:19:51.728996] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.048 [2024-09-28 16:19:51.729049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.048 [2024-09-28 16:19:51.729066] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.048 [2024-09-28 16:19:51.729072] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.307 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.307 "name": "raid_bdev1", 00:18:37.307 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:37.308 "strip_size_kb": 0, 00:18:37.308 "state": "online", 00:18:37.308 "raid_level": "raid1", 00:18:37.308 "superblock": true, 00:18:37.308 "num_base_bdevs": 2, 00:18:37.308 "num_base_bdevs_discovered": 1, 00:18:37.308 "num_base_bdevs_operational": 1, 00:18:37.308 "base_bdevs_list": [ 00:18:37.308 { 00:18:37.308 "name": null, 00:18:37.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.308 "is_configured": false, 00:18:37.308 "data_offset": 0, 00:18:37.308 "data_size": 7936 00:18:37.308 }, 00:18:37.308 { 00:18:37.308 "name": "BaseBdev2", 00:18:37.308 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:37.308 "is_configured": true, 00:18:37.308 "data_offset": 256, 00:18:37.308 "data_size": 7936 00:18:37.308 } 00:18:37.308 ] 00:18:37.308 }' 00:18:37.308 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.308 16:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.567 16:19:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.567 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.567 "name": "raid_bdev1", 00:18:37.567 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:37.567 "strip_size_kb": 0, 00:18:37.567 "state": "online", 00:18:37.567 "raid_level": "raid1", 00:18:37.567 "superblock": true, 00:18:37.567 "num_base_bdevs": 2, 00:18:37.567 "num_base_bdevs_discovered": 1, 00:18:37.568 "num_base_bdevs_operational": 1, 00:18:37.568 "base_bdevs_list": [ 00:18:37.568 { 00:18:37.568 "name": null, 00:18:37.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.568 "is_configured": false, 00:18:37.568 "data_offset": 0, 00:18:37.568 "data_size": 7936 00:18:37.568 }, 00:18:37.568 { 00:18:37.568 "name": "BaseBdev2", 00:18:37.568 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:37.568 "is_configured": true, 00:18:37.568 "data_offset": 256, 00:18:37.568 "data_size": 7936 00:18:37.568 } 00:18:37.568 ] 00:18:37.568 }' 00:18:37.568 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.568 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.568 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.828 [2024-09-28 16:19:52.291764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:37.828 [2024-09-28 16:19:52.291861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.828 [2024-09-28 16:19:52.291889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:37.828 [2024-09-28 16:19:52.291899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.828 [2024-09-28 16:19:52.292093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.828 [2024-09-28 16:19:52.292106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:37.828 [2024-09-28 16:19:52.292149] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:37.828 [2024-09-28 16:19:52.292160] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:37.828 [2024-09-28 16:19:52.292174] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:37.828 [2024-09-28 16:19:52.292184] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:37.828 BaseBdev1 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.828 16:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.768 "name": "raid_bdev1", 00:18:38.768 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:38.768 "strip_size_kb": 0, 00:18:38.768 "state": "online", 00:18:38.768 "raid_level": "raid1", 00:18:38.768 "superblock": true, 00:18:38.768 "num_base_bdevs": 2, 00:18:38.768 "num_base_bdevs_discovered": 1, 00:18:38.768 "num_base_bdevs_operational": 1, 00:18:38.768 "base_bdevs_list": [ 00:18:38.768 { 00:18:38.768 "name": null, 00:18:38.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.768 "is_configured": false, 00:18:38.768 "data_offset": 0, 00:18:38.768 "data_size": 7936 00:18:38.768 }, 00:18:38.768 { 00:18:38.768 "name": "BaseBdev2", 00:18:38.768 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:38.768 "is_configured": true, 00:18:38.768 "data_offset": 256, 00:18:38.768 "data_size": 7936 00:18:38.768 } 00:18:38.768 ] 00:18:38.768 }' 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.768 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.338 "name": "raid_bdev1", 00:18:39.338 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:39.338 "strip_size_kb": 0, 00:18:39.338 "state": "online", 00:18:39.338 "raid_level": "raid1", 00:18:39.338 "superblock": true, 00:18:39.338 "num_base_bdevs": 2, 00:18:39.338 "num_base_bdevs_discovered": 1, 00:18:39.338 "num_base_bdevs_operational": 1, 00:18:39.338 "base_bdevs_list": [ 00:18:39.338 { 00:18:39.338 "name": null, 00:18:39.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.338 "is_configured": false, 00:18:39.338 "data_offset": 0, 00:18:39.338 "data_size": 7936 00:18:39.338 }, 00:18:39.338 { 00:18:39.338 "name": "BaseBdev2", 00:18:39.338 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:39.338 "is_configured": 
true, 00:18:39.338 "data_offset": 256, 00:18:39.338 "data_size": 7936 00:18:39.338 } 00:18:39.338 ] 00:18:39.338 }' 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 [2024-09-28 16:19:53.925162] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.338 [2024-09-28 16:19:53.925284] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.338 [2024-09-28 16:19:53.925315] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:39.338 request: 00:18:39.338 { 00:18:39.338 "base_bdev": "BaseBdev1", 00:18:39.338 "raid_bdev": "raid_bdev1", 00:18:39.338 "method": "bdev_raid_add_base_bdev", 00:18:39.338 "req_id": 1 00:18:39.338 } 00:18:39.338 Got JSON-RPC error response 00:18:39.338 response: 00:18:39.338 { 00:18:39.338 "code": -22, 00:18:39.338 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:39.338 } 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.338 16:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.277 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.537 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.537 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.537 "name": "raid_bdev1", 00:18:40.537 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:40.537 "strip_size_kb": 0, 00:18:40.537 "state": "online", 00:18:40.537 "raid_level": "raid1", 00:18:40.537 "superblock": true, 00:18:40.537 "num_base_bdevs": 2, 00:18:40.537 "num_base_bdevs_discovered": 1, 00:18:40.537 "num_base_bdevs_operational": 1, 00:18:40.537 "base_bdevs_list": [ 00:18:40.537 { 00:18:40.537 "name": null, 00:18:40.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.537 "is_configured": false, 00:18:40.537 
"data_offset": 0, 00:18:40.537 "data_size": 7936 00:18:40.537 }, 00:18:40.537 { 00:18:40.537 "name": "BaseBdev2", 00:18:40.537 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:40.537 "is_configured": true, 00:18:40.537 "data_offset": 256, 00:18:40.537 "data_size": 7936 00:18:40.537 } 00:18:40.537 ] 00:18:40.537 }' 00:18:40.537 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.537 16:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.796 "name": "raid_bdev1", 00:18:40.796 "uuid": "ef7b568c-3cce-41e3-b749-9f9fef6f079e", 00:18:40.796 
"strip_size_kb": 0, 00:18:40.796 "state": "online", 00:18:40.796 "raid_level": "raid1", 00:18:40.796 "superblock": true, 00:18:40.796 "num_base_bdevs": 2, 00:18:40.796 "num_base_bdevs_discovered": 1, 00:18:40.796 "num_base_bdevs_operational": 1, 00:18:40.796 "base_bdevs_list": [ 00:18:40.796 { 00:18:40.796 "name": null, 00:18:40.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.796 "is_configured": false, 00:18:40.796 "data_offset": 0, 00:18:40.796 "data_size": 7936 00:18:40.796 }, 00:18:40.796 { 00:18:40.796 "name": "BaseBdev2", 00:18:40.796 "uuid": "64508ccd-9734-5872-9e91-f9b5dd4f8922", 00:18:40.796 "is_configured": true, 00:18:40.796 "data_offset": 256, 00:18:40.796 "data_size": 7936 00:18:40.796 } 00:18:40.796 ] 00:18:40.796 }' 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87781 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87781 ']' 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87781 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.796 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87781 00:18:41.056 killing process with 
pid 87781 00:18:41.056 Received shutdown signal, test time was about 60.000000 seconds 00:18:41.056 00:18:41.056 Latency(us) 00:18:41.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.056 =================================================================================================================== 00:18:41.056 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.056 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.056 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.056 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87781' 00:18:41.056 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87781 00:18:41.056 [2024-09-28 16:19:55.493984] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.056 [2024-09-28 16:19:55.494071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.056 [2024-09-28 16:19:55.494115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.056 [2024-09-28 16:19:55.494125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:41.056 16:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87781 00:18:41.316 [2024-09-28 16:19:55.796025] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.717 16:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:42.717 00:18:42.717 real 0m19.715s 00:18:42.717 user 0m25.682s 00:18:42.717 sys 0m2.648s 00:18:42.717 16:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.717 
************************************ 00:18:42.717 END TEST raid_rebuild_test_sb_md_separate 00:18:42.717 ************************************ 00:18:42.717 16:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.717 16:19:57 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:42.717 16:19:57 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:42.717 16:19:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:42.717 16:19:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.717 16:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.717 ************************************ 00:18:42.717 START TEST raid_state_function_test_sb_md_interleaved 00:18:42.717 ************************************ 00:18:42.717 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:42.717 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:42.717 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:42.717 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:42.717 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:42.718 16:19:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # 
raid_pid=88468 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:42.718 Process raid pid: 88468 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88468' 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88468 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88468 ']' 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.718 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 [2024-09-28 16:19:57.156246] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:42.718 [2024-09-28 16:19:57.156501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.718 [2024-09-28 16:19:57.328641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.009 [2024-09-28 16:19:57.529391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.289 [2024-09-28 16:19:57.725241] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.289 [2024-09-28 16:19:57.725352] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.289 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.289 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:43.289 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:43.289 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.289 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.549 [2024-09-28 16:19:57.974809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.549 [2024-09-28 16:19:57.974864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.549 [2024-09-28 16:19:57.974873] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.549 [2024-09-28 16:19:57.974882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.549 16:19:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.549 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:43.549 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.549 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.549 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.549 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.549 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.550 16:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.550 16:19:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.550 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.550 "name": "Existed_Raid", 00:18:43.550 "uuid": "00c60f8c-cd40-4083-8ddb-c4c3ac5502f5", 00:18:43.550 "strip_size_kb": 0, 00:18:43.550 "state": "configuring", 00:18:43.550 "raid_level": "raid1", 00:18:43.550 "superblock": true, 00:18:43.550 "num_base_bdevs": 2, 00:18:43.550 "num_base_bdevs_discovered": 0, 00:18:43.550 "num_base_bdevs_operational": 2, 00:18:43.550 "base_bdevs_list": [ 00:18:43.550 { 00:18:43.550 "name": "BaseBdev1", 00:18:43.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.550 "is_configured": false, 00:18:43.550 "data_offset": 0, 00:18:43.550 "data_size": 0 00:18:43.550 }, 00:18:43.550 { 00:18:43.550 "name": "BaseBdev2", 00:18:43.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.550 "is_configured": false, 00:18:43.550 "data_offset": 0, 00:18:43.550 "data_size": 0 00:18:43.550 } 00:18:43.550 ] 00:18:43.550 }' 00:18:43.550 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.550 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.810 [2024-09-28 16:19:58.461909] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:43.810 [2024-09-28 16:19:58.461991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.810 [2024-09-28 16:19:58.469918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.810 [2024-09-28 16:19:58.469990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.810 [2024-09-28 16:19:58.470015] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.810 [2024-09-28 16:19:58.470038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.810 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 [2024-09-28 16:19:58.522729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.071 BaseBdev1 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 [ 00:18:44.071 { 00:18:44.071 "name": "BaseBdev1", 00:18:44.071 "aliases": [ 00:18:44.071 "8c1bf75f-20b8-4f83-942a-61ece07092a8" 00:18:44.071 ], 00:18:44.071 "product_name": "Malloc disk", 00:18:44.071 "block_size": 4128, 00:18:44.071 "num_blocks": 8192, 00:18:44.071 "uuid": "8c1bf75f-20b8-4f83-942a-61ece07092a8", 00:18:44.071 "md_size": 32, 00:18:44.071 
"md_interleave": true, 00:18:44.071 "dif_type": 0, 00:18:44.071 "assigned_rate_limits": { 00:18:44.071 "rw_ios_per_sec": 0, 00:18:44.071 "rw_mbytes_per_sec": 0, 00:18:44.071 "r_mbytes_per_sec": 0, 00:18:44.071 "w_mbytes_per_sec": 0 00:18:44.071 }, 00:18:44.071 "claimed": true, 00:18:44.071 "claim_type": "exclusive_write", 00:18:44.071 "zoned": false, 00:18:44.071 "supported_io_types": { 00:18:44.071 "read": true, 00:18:44.071 "write": true, 00:18:44.071 "unmap": true, 00:18:44.071 "flush": true, 00:18:44.071 "reset": true, 00:18:44.071 "nvme_admin": false, 00:18:44.071 "nvme_io": false, 00:18:44.071 "nvme_io_md": false, 00:18:44.071 "write_zeroes": true, 00:18:44.071 "zcopy": true, 00:18:44.071 "get_zone_info": false, 00:18:44.071 "zone_management": false, 00:18:44.071 "zone_append": false, 00:18:44.071 "compare": false, 00:18:44.071 "compare_and_write": false, 00:18:44.071 "abort": true, 00:18:44.071 "seek_hole": false, 00:18:44.071 "seek_data": false, 00:18:44.071 "copy": true, 00:18:44.071 "nvme_iov_md": false 00:18:44.071 }, 00:18:44.071 "memory_domains": [ 00:18:44.071 { 00:18:44.071 "dma_device_id": "system", 00:18:44.071 "dma_device_type": 1 00:18:44.071 }, 00:18:44.071 { 00:18:44.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.071 "dma_device_type": 2 00:18:44.071 } 00:18:44.071 ], 00:18:44.071 "driver_specific": {} 00:18:44.071 } 00:18:44.071 ] 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.071 16:19:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.071 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.071 "name": "Existed_Raid", 00:18:44.071 "uuid": "84a19b36-1be7-49d4-94bb-023b60164961", 00:18:44.071 "strip_size_kb": 0, 00:18:44.071 "state": "configuring", 00:18:44.071 "raid_level": "raid1", 
00:18:44.071 "superblock": true, 00:18:44.071 "num_base_bdevs": 2, 00:18:44.071 "num_base_bdevs_discovered": 1, 00:18:44.072 "num_base_bdevs_operational": 2, 00:18:44.072 "base_bdevs_list": [ 00:18:44.072 { 00:18:44.072 "name": "BaseBdev1", 00:18:44.072 "uuid": "8c1bf75f-20b8-4f83-942a-61ece07092a8", 00:18:44.072 "is_configured": true, 00:18:44.072 "data_offset": 256, 00:18:44.072 "data_size": 7936 00:18:44.072 }, 00:18:44.072 { 00:18:44.072 "name": "BaseBdev2", 00:18:44.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.072 "is_configured": false, 00:18:44.072 "data_offset": 0, 00:18:44.072 "data_size": 0 00:18:44.072 } 00:18:44.072 ] 00:18:44.072 }' 00:18:44.072 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.072 16:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.642 [2024-09-28 16:19:59.029891] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.642 [2024-09-28 16:19:59.029967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.642 [2024-09-28 16:19:59.041921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.642 [2024-09-28 16:19:59.043575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.642 [2024-09-28 16:19:59.043652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.642 
16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.642 "name": "Existed_Raid", 00:18:44.642 "uuid": "a1148e8c-164c-4646-a881-99c051cf7301", 00:18:44.642 "strip_size_kb": 0, 00:18:44.642 "state": "configuring", 00:18:44.642 "raid_level": "raid1", 00:18:44.642 "superblock": true, 00:18:44.642 "num_base_bdevs": 2, 00:18:44.642 "num_base_bdevs_discovered": 1, 00:18:44.642 "num_base_bdevs_operational": 2, 00:18:44.642 "base_bdevs_list": [ 00:18:44.642 { 00:18:44.642 "name": "BaseBdev1", 00:18:44.642 "uuid": "8c1bf75f-20b8-4f83-942a-61ece07092a8", 00:18:44.642 "is_configured": true, 00:18:44.642 "data_offset": 256, 00:18:44.642 "data_size": 7936 00:18:44.642 }, 00:18:44.642 { 00:18:44.642 "name": "BaseBdev2", 00:18:44.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.642 "is_configured": false, 00:18:44.642 "data_offset": 0, 00:18:44.642 "data_size": 0 00:18:44.642 } 00:18:44.642 ] 00:18:44.642 }' 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:44.642 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.903 [2024-09-28 16:19:59.518714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.903 [2024-09-28 16:19:59.518908] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:44.903 [2024-09-28 16:19:59.518921] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:44.903 [2024-09-28 16:19:59.519012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:44.903 [2024-09-28 16:19:59.519079] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:44.903 [2024-09-28 16:19:59.519089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:44.903 [2024-09-28 16:19:59.519140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.903 BaseBdev2 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.903 [ 00:18:44.903 { 00:18:44.903 "name": "BaseBdev2", 00:18:44.903 "aliases": [ 00:18:44.903 "e64354db-5553-4de5-849c-98d226d0c0e4" 00:18:44.903 ], 00:18:44.903 "product_name": "Malloc disk", 00:18:44.903 "block_size": 4128, 00:18:44.903 "num_blocks": 8192, 00:18:44.903 "uuid": "e64354db-5553-4de5-849c-98d226d0c0e4", 00:18:44.903 "md_size": 32, 00:18:44.903 "md_interleave": true, 00:18:44.903 "dif_type": 0, 00:18:44.903 "assigned_rate_limits": { 00:18:44.903 "rw_ios_per_sec": 0, 00:18:44.903 "rw_mbytes_per_sec": 0, 00:18:44.903 "r_mbytes_per_sec": 0, 00:18:44.903 "w_mbytes_per_sec": 0 00:18:44.903 }, 00:18:44.903 "claimed": true, 00:18:44.903 "claim_type": "exclusive_write", 
00:18:44.903 "zoned": false, 00:18:44.903 "supported_io_types": { 00:18:44.903 "read": true, 00:18:44.903 "write": true, 00:18:44.903 "unmap": true, 00:18:44.903 "flush": true, 00:18:44.903 "reset": true, 00:18:44.903 "nvme_admin": false, 00:18:44.903 "nvme_io": false, 00:18:44.903 "nvme_io_md": false, 00:18:44.903 "write_zeroes": true, 00:18:44.903 "zcopy": true, 00:18:44.903 "get_zone_info": false, 00:18:44.903 "zone_management": false, 00:18:44.903 "zone_append": false, 00:18:44.903 "compare": false, 00:18:44.903 "compare_and_write": false, 00:18:44.903 "abort": true, 00:18:44.903 "seek_hole": false, 00:18:44.903 "seek_data": false, 00:18:44.903 "copy": true, 00:18:44.903 "nvme_iov_md": false 00:18:44.903 }, 00:18:44.903 "memory_domains": [ 00:18:44.903 { 00:18:44.903 "dma_device_id": "system", 00:18:44.903 "dma_device_type": 1 00:18:44.903 }, 00:18:44.903 { 00:18:44.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.903 "dma_device_type": 2 00:18:44.903 } 00:18:44.903 ], 00:18:44.903 "driver_specific": {} 00:18:44.903 } 00:18:44.903 ] 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.903 
16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.903 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.904 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.904 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.904 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.904 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.904 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.164 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.164 "name": "Existed_Raid", 00:18:45.164 "uuid": "a1148e8c-164c-4646-a881-99c051cf7301", 00:18:45.164 "strip_size_kb": 0, 00:18:45.164 "state": "online", 00:18:45.164 "raid_level": "raid1", 00:18:45.164 "superblock": true, 00:18:45.164 "num_base_bdevs": 2, 00:18:45.164 "num_base_bdevs_discovered": 2, 00:18:45.164 
"num_base_bdevs_operational": 2, 00:18:45.164 "base_bdevs_list": [ 00:18:45.164 { 00:18:45.164 "name": "BaseBdev1", 00:18:45.164 "uuid": "8c1bf75f-20b8-4f83-942a-61ece07092a8", 00:18:45.164 "is_configured": true, 00:18:45.164 "data_offset": 256, 00:18:45.164 "data_size": 7936 00:18:45.164 }, 00:18:45.164 { 00:18:45.164 "name": "BaseBdev2", 00:18:45.164 "uuid": "e64354db-5553-4de5-849c-98d226d0c0e4", 00:18:45.164 "is_configured": true, 00:18:45.164 "data_offset": 256, 00:18:45.164 "data_size": 7936 00:18:45.164 } 00:18:45.164 ] 00:18:45.164 }' 00:18:45.164 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.164 16:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.424 16:20:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.424 [2024-09-28 16:20:00.046080] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.424 "name": "Existed_Raid", 00:18:45.424 "aliases": [ 00:18:45.424 "a1148e8c-164c-4646-a881-99c051cf7301" 00:18:45.424 ], 00:18:45.424 "product_name": "Raid Volume", 00:18:45.424 "block_size": 4128, 00:18:45.424 "num_blocks": 7936, 00:18:45.424 "uuid": "a1148e8c-164c-4646-a881-99c051cf7301", 00:18:45.424 "md_size": 32, 00:18:45.424 "md_interleave": true, 00:18:45.424 "dif_type": 0, 00:18:45.424 "assigned_rate_limits": { 00:18:45.424 "rw_ios_per_sec": 0, 00:18:45.424 "rw_mbytes_per_sec": 0, 00:18:45.424 "r_mbytes_per_sec": 0, 00:18:45.424 "w_mbytes_per_sec": 0 00:18:45.424 }, 00:18:45.424 "claimed": false, 00:18:45.424 "zoned": false, 00:18:45.424 "supported_io_types": { 00:18:45.424 "read": true, 00:18:45.424 "write": true, 00:18:45.424 "unmap": false, 00:18:45.424 "flush": false, 00:18:45.424 "reset": true, 00:18:45.424 "nvme_admin": false, 00:18:45.424 "nvme_io": false, 00:18:45.424 "nvme_io_md": false, 00:18:45.424 "write_zeroes": true, 00:18:45.424 "zcopy": false, 00:18:45.424 "get_zone_info": false, 00:18:45.424 "zone_management": false, 00:18:45.424 "zone_append": false, 00:18:45.424 "compare": false, 00:18:45.424 "compare_and_write": false, 00:18:45.424 "abort": false, 00:18:45.424 "seek_hole": false, 00:18:45.424 "seek_data": false, 00:18:45.424 "copy": false, 00:18:45.424 "nvme_iov_md": false 00:18:45.424 }, 00:18:45.424 "memory_domains": [ 00:18:45.424 { 00:18:45.424 "dma_device_id": "system", 00:18:45.424 "dma_device_type": 1 00:18:45.424 }, 00:18:45.424 { 00:18:45.424 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:45.424 "dma_device_type": 2 00:18:45.424 }, 00:18:45.424 { 00:18:45.424 "dma_device_id": "system", 00:18:45.424 "dma_device_type": 1 00:18:45.424 }, 00:18:45.424 { 00:18:45.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.424 "dma_device_type": 2 00:18:45.424 } 00:18:45.424 ], 00:18:45.424 "driver_specific": { 00:18:45.424 "raid": { 00:18:45.424 "uuid": "a1148e8c-164c-4646-a881-99c051cf7301", 00:18:45.424 "strip_size_kb": 0, 00:18:45.424 "state": "online", 00:18:45.424 "raid_level": "raid1", 00:18:45.424 "superblock": true, 00:18:45.424 "num_base_bdevs": 2, 00:18:45.424 "num_base_bdevs_discovered": 2, 00:18:45.424 "num_base_bdevs_operational": 2, 00:18:45.424 "base_bdevs_list": [ 00:18:45.424 { 00:18:45.424 "name": "BaseBdev1", 00:18:45.424 "uuid": "8c1bf75f-20b8-4f83-942a-61ece07092a8", 00:18:45.424 "is_configured": true, 00:18:45.424 "data_offset": 256, 00:18:45.424 "data_size": 7936 00:18:45.424 }, 00:18:45.424 { 00:18:45.424 "name": "BaseBdev2", 00:18:45.424 "uuid": "e64354db-5553-4de5-849c-98d226d0c0e4", 00:18:45.424 "is_configured": true, 00:18:45.424 "data_offset": 256, 00:18:45.424 "data_size": 7936 00:18:45.424 } 00:18:45.424 ] 00:18:45.424 } 00:18:45.424 } 00:18:45.424 }' 00:18:45.424 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:45.685 BaseBdev2' 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:45.685 
16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.685 [2024-09-28 16:20:00.253527] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.685 16:20:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:45.685 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.686 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.686 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.686 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.686 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.686 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.686 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.686 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.946 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.946 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.946 "name": "Existed_Raid", 00:18:45.946 "uuid": "a1148e8c-164c-4646-a881-99c051cf7301", 00:18:45.946 "strip_size_kb": 0, 00:18:45.946 "state": "online", 00:18:45.946 "raid_level": "raid1", 00:18:45.946 "superblock": true, 00:18:45.946 "num_base_bdevs": 2, 00:18:45.946 "num_base_bdevs_discovered": 1, 00:18:45.946 "num_base_bdevs_operational": 1, 00:18:45.946 "base_bdevs_list": [ 00:18:45.946 { 00:18:45.946 "name": null, 00:18:45.946 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:45.946 "is_configured": false, 00:18:45.946 "data_offset": 0, 00:18:45.946 "data_size": 7936 00:18:45.946 }, 00:18:45.946 { 00:18:45.946 "name": "BaseBdev2", 00:18:45.946 "uuid": "e64354db-5553-4de5-849c-98d226d0c0e4", 00:18:45.946 "is_configured": true, 00:18:45.946 "data_offset": 256, 00:18:45.946 "data_size": 7936 00:18:45.946 } 00:18:45.946 ] 00:18:45.946 }' 00:18:45.946 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.946 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:46.206 16:20:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.206 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.207 [2024-09-28 16:20:00.850353] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:46.207 [2024-09-28 16:20:00.850451] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.467 [2024-09-28 16:20:00.940263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.467 [2024-09-28 16:20:00.940381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.467 [2024-09-28 16:20:00.940421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:46.467 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88468 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88468 ']' 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88468 00:18:46.468 16:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:46.468 16:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.468 16:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88468 00:18:46.468 16:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:46.468 killing process with pid 88468 00:18:46.468 16:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:46.468 16:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88468' 00:18:46.468 16:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88468 00:18:46.468 [2024-09-28 16:20:01.038763] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.468 16:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88468 00:18:46.468 [2024-09-28 16:20:01.054734] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:47.850 
16:20:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:47.850 00:18:47.850 real 0m5.196s 00:18:47.850 user 0m7.424s 00:18:47.850 sys 0m0.922s 00:18:47.850 16:20:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:47.850 ************************************ 00:18:47.850 END TEST raid_state_function_test_sb_md_interleaved 00:18:47.850 ************************************ 00:18:47.850 16:20:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.850 16:20:02 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:47.850 16:20:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:47.850 16:20:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:47.850 16:20:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.850 ************************************ 00:18:47.850 START TEST raid_superblock_test_md_interleaved 00:18:47.851 ************************************ 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88729 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88729 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88729 ']' 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.851 16:20:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.851 [2024-09-28 16:20:02.431458] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:47.851 [2024-09-28 16:20:02.431593] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88729 ] 00:18:48.110 [2024-09-28 16:20:02.600761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.370 [2024-09-28 16:20:02.796981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.370 [2024-09-28 16:20:02.987051] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.370 [2024-09-28 16:20:02.987153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.630 malloc1 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.630 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.630 [2024-09-28 16:20:03.291169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:48.630 [2024-09-28 16:20:03.291296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.630 [2024-09-28 16:20:03.291339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:18:48.630 [2024-09-28 16:20:03.291371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.630 [2024-09-28 16:20:03.293079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.631 [2024-09-28 16:20:03.293151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:48.631 pt1 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.631 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.891 malloc2 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.891 [2024-09-28 16:20:03.359768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.891 [2024-09-28 16:20:03.359822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.891 [2024-09-28 16:20:03.359842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:48.891 [2024-09-28 16:20:03.359851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.891 [2024-09-28 16:20:03.361567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.891 [2024-09-28 16:20:03.361604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.891 pt2 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.891 [2024-09-28 
16:20:03.371821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:48.891 [2024-09-28 16:20:03.373499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.891 [2024-09-28 16:20:03.373710] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:48.891 [2024-09-28 16:20:03.373728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:48.891 [2024-09-28 16:20:03.373795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:48.891 [2024-09-28 16:20:03.373856] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:48.891 [2024-09-28 16:20:03.373868] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:48.891 [2024-09-28 16:20:03.373933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.891 "name": "raid_bdev1", 00:18:48.891 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:48.891 "strip_size_kb": 0, 00:18:48.891 "state": "online", 00:18:48.891 "raid_level": "raid1", 00:18:48.891 "superblock": true, 00:18:48.891 "num_base_bdevs": 2, 00:18:48.891 "num_base_bdevs_discovered": 2, 00:18:48.891 "num_base_bdevs_operational": 2, 00:18:48.891 "base_bdevs_list": [ 00:18:48.891 { 00:18:48.891 "name": "pt1", 00:18:48.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:48.891 "is_configured": true, 00:18:48.891 "data_offset": 256, 00:18:48.891 "data_size": 7936 00:18:48.891 }, 00:18:48.891 { 00:18:48.891 "name": "pt2", 00:18:48.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:48.891 "is_configured": true, 00:18:48.891 "data_offset": 256, 00:18:48.891 "data_size": 7936 00:18:48.891 } 00:18:48.891 ] 00:18:48.891 }' 00:18:48.891 16:20:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.891 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.151 [2024-09-28 16:20:03.763481] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.151 "name": "raid_bdev1", 00:18:49.151 "aliases": [ 00:18:49.151 "c9614f0a-4ab7-415d-be97-c40be8de6401" 00:18:49.151 ], 00:18:49.151 "product_name": "Raid Volume", 00:18:49.151 "block_size": 4128, 00:18:49.151 
"num_blocks": 7936, 00:18:49.151 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:49.151 "md_size": 32, 00:18:49.151 "md_interleave": true, 00:18:49.151 "dif_type": 0, 00:18:49.151 "assigned_rate_limits": { 00:18:49.151 "rw_ios_per_sec": 0, 00:18:49.151 "rw_mbytes_per_sec": 0, 00:18:49.151 "r_mbytes_per_sec": 0, 00:18:49.151 "w_mbytes_per_sec": 0 00:18:49.151 }, 00:18:49.151 "claimed": false, 00:18:49.151 "zoned": false, 00:18:49.151 "supported_io_types": { 00:18:49.151 "read": true, 00:18:49.151 "write": true, 00:18:49.151 "unmap": false, 00:18:49.151 "flush": false, 00:18:49.151 "reset": true, 00:18:49.151 "nvme_admin": false, 00:18:49.151 "nvme_io": false, 00:18:49.151 "nvme_io_md": false, 00:18:49.151 "write_zeroes": true, 00:18:49.151 "zcopy": false, 00:18:49.151 "get_zone_info": false, 00:18:49.151 "zone_management": false, 00:18:49.151 "zone_append": false, 00:18:49.151 "compare": false, 00:18:49.151 "compare_and_write": false, 00:18:49.151 "abort": false, 00:18:49.151 "seek_hole": false, 00:18:49.151 "seek_data": false, 00:18:49.151 "copy": false, 00:18:49.151 "nvme_iov_md": false 00:18:49.151 }, 00:18:49.151 "memory_domains": [ 00:18:49.151 { 00:18:49.151 "dma_device_id": "system", 00:18:49.151 "dma_device_type": 1 00:18:49.151 }, 00:18:49.151 { 00:18:49.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.151 "dma_device_type": 2 00:18:49.151 }, 00:18:49.151 { 00:18:49.151 "dma_device_id": "system", 00:18:49.151 "dma_device_type": 1 00:18:49.151 }, 00:18:49.151 { 00:18:49.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.151 "dma_device_type": 2 00:18:49.151 } 00:18:49.151 ], 00:18:49.151 "driver_specific": { 00:18:49.151 "raid": { 00:18:49.151 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:49.151 "strip_size_kb": 0, 00:18:49.151 "state": "online", 00:18:49.151 "raid_level": "raid1", 00:18:49.151 "superblock": true, 00:18:49.151 "num_base_bdevs": 2, 00:18:49.151 "num_base_bdevs_discovered": 2, 00:18:49.151 "num_base_bdevs_operational": 
2, 00:18:49.151 "base_bdevs_list": [ 00:18:49.151 { 00:18:49.151 "name": "pt1", 00:18:49.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:49.151 "is_configured": true, 00:18:49.151 "data_offset": 256, 00:18:49.151 "data_size": 7936 00:18:49.151 }, 00:18:49.151 { 00:18:49.151 "name": "pt2", 00:18:49.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.151 "is_configured": true, 00:18:49.151 "data_offset": 256, 00:18:49.151 "data_size": 7936 00:18:49.151 } 00:18:49.151 ] 00:18:49.151 } 00:18:49.151 } 00:18:49.151 }' 00:18:49.151 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:49.412 pt2' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.412 16:20:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:49.412 16:20:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:49.412 [2024-09-28 16:20:04.006996] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c9614f0a-4ab7-415d-be97-c40be8de6401 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z c9614f0a-4ab7-415d-be97-c40be8de6401 ']' 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 [2024-09-28 16:20:04.054686] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.412 [2024-09-28 16:20:04.054707] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.412 [2024-09-28 16:20:04.054769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.412 [2024-09-28 16:20:04.054811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.412 [2024-09-28 16:20:04.054821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:49.412 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 16:20:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.673 16:20:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:49.673 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.674 [2024-09-28 16:20:04.186491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:49.674 [2024-09-28 16:20:04.188265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:18:49.674 [2024-09-28 16:20:04.188339] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:49.674 [2024-09-28 16:20:04.188382] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:49.674 [2024-09-28 16:20:04.188395] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.674 [2024-09-28 16:20:04.188405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:49.674 request: 00:18:49.674 { 00:18:49.674 "name": "raid_bdev1", 00:18:49.674 "raid_level": "raid1", 00:18:49.674 "base_bdevs": [ 00:18:49.674 "malloc1", 00:18:49.674 "malloc2" 00:18:49.674 ], 00:18:49.674 "superblock": false, 00:18:49.674 "method": "bdev_raid_create", 00:18:49.674 "req_id": 1 00:18:49.674 } 00:18:49.674 Got JSON-RPC error response 00:18:49.674 response: 00:18:49.674 { 00:18:49.674 "code": -17, 00:18:49.674 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:49.674 } 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:49.674 16:20:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.674 [2024-09-28 16:20:04.246350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:49.674 [2024-09-28 16:20:04.246441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.674 [2024-09-28 16:20:04.246470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:49.674 [2024-09-28 16:20:04.246503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.674 [2024-09-28 16:20:04.248272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.674 [2024-09-28 16:20:04.248340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:49.674 [2024-09-28 16:20:04.248398] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:49.674 [2024-09-28 16:20:04.248468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:49.674 pt1 00:18:49.674 16:20:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.674 
16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.674 "name": "raid_bdev1", 00:18:49.674 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:49.674 "strip_size_kb": 0, 00:18:49.674 "state": "configuring", 00:18:49.674 "raid_level": "raid1", 00:18:49.674 "superblock": true, 00:18:49.674 "num_base_bdevs": 2, 00:18:49.674 "num_base_bdevs_discovered": 1, 00:18:49.674 "num_base_bdevs_operational": 2, 00:18:49.674 "base_bdevs_list": [ 00:18:49.674 { 00:18:49.674 "name": "pt1", 00:18:49.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:49.674 "is_configured": true, 00:18:49.674 "data_offset": 256, 00:18:49.674 "data_size": 7936 00:18:49.674 }, 00:18:49.674 { 00:18:49.674 "name": null, 00:18:49.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.674 "is_configured": false, 00:18:49.674 "data_offset": 256, 00:18:49.674 "data_size": 7936 00:18:49.674 } 00:18:49.674 ] 00:18:49.674 }' 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.674 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.245 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:50.245 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:50.245 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:50.245 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:50.245 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.245 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.245 [2024-09-28 16:20:04.705694] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:50.245 [2024-09-28 16:20:04.705747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.245 [2024-09-28 16:20:04.705764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:50.245 [2024-09-28 16:20:04.705773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.245 [2024-09-28 16:20:04.705875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.245 [2024-09-28 16:20:04.705891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:50.245 [2024-09-28 16:20:04.705924] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:50.245 [2024-09-28 16:20:04.705948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.245 [2024-09-28 16:20:04.706023] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:50.245 [2024-09-28 16:20:04.706033] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:50.245 [2024-09-28 16:20:04.706092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:50.245 [2024-09-28 16:20:04.706145] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:50.245 [2024-09-28 16:20:04.706153] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:50.245 [2024-09-28 16:20:04.706200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.246 pt2 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:50.246 16:20:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.246 16:20:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.246 "name": "raid_bdev1", 00:18:50.246 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:50.246 "strip_size_kb": 0, 00:18:50.246 "state": "online", 00:18:50.246 "raid_level": "raid1", 00:18:50.246 "superblock": true, 00:18:50.246 "num_base_bdevs": 2, 00:18:50.246 "num_base_bdevs_discovered": 2, 00:18:50.246 "num_base_bdevs_operational": 2, 00:18:50.246 "base_bdevs_list": [ 00:18:50.246 { 00:18:50.246 "name": "pt1", 00:18:50.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.246 "is_configured": true, 00:18:50.246 "data_offset": 256, 00:18:50.246 "data_size": 7936 00:18:50.246 }, 00:18:50.246 { 00:18:50.246 "name": "pt2", 00:18:50.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.246 "is_configured": true, 00:18:50.246 "data_offset": 256, 00:18:50.246 "data_size": 7936 00:18:50.246 } 00:18:50.246 ] 00:18:50.246 }' 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.246 16:20:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.506 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:50.506 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:50.506 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:50.506 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:50.506 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:50.506 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.767 [2024-09-28 16:20:05.201081] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:50.767 "name": "raid_bdev1", 00:18:50.767 "aliases": [ 00:18:50.767 "c9614f0a-4ab7-415d-be97-c40be8de6401" 00:18:50.767 ], 00:18:50.767 "product_name": "Raid Volume", 00:18:50.767 "block_size": 4128, 00:18:50.767 "num_blocks": 7936, 00:18:50.767 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:50.767 "md_size": 32, 00:18:50.767 "md_interleave": true, 00:18:50.767 "dif_type": 0, 00:18:50.767 "assigned_rate_limits": { 00:18:50.767 "rw_ios_per_sec": 0, 00:18:50.767 "rw_mbytes_per_sec": 0, 00:18:50.767 "r_mbytes_per_sec": 0, 00:18:50.767 "w_mbytes_per_sec": 0 00:18:50.767 }, 00:18:50.767 "claimed": false, 00:18:50.767 "zoned": false, 00:18:50.767 "supported_io_types": { 00:18:50.767 "read": true, 00:18:50.767 "write": true, 00:18:50.767 "unmap": false, 00:18:50.767 "flush": false, 00:18:50.767 "reset": true, 00:18:50.767 "nvme_admin": false, 00:18:50.767 "nvme_io": false, 00:18:50.767 "nvme_io_md": false, 00:18:50.767 "write_zeroes": true, 00:18:50.767 "zcopy": false, 00:18:50.767 "get_zone_info": false, 00:18:50.767 "zone_management": false, 00:18:50.767 "zone_append": false, 00:18:50.767 "compare": false, 00:18:50.767 "compare_and_write": false, 00:18:50.767 "abort": false, 00:18:50.767 "seek_hole": false, 
00:18:50.767 "seek_data": false, 00:18:50.767 "copy": false, 00:18:50.767 "nvme_iov_md": false 00:18:50.767 }, 00:18:50.767 "memory_domains": [ 00:18:50.767 { 00:18:50.767 "dma_device_id": "system", 00:18:50.767 "dma_device_type": 1 00:18:50.767 }, 00:18:50.767 { 00:18:50.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.767 "dma_device_type": 2 00:18:50.767 }, 00:18:50.767 { 00:18:50.767 "dma_device_id": "system", 00:18:50.767 "dma_device_type": 1 00:18:50.767 }, 00:18:50.767 { 00:18:50.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.767 "dma_device_type": 2 00:18:50.767 } 00:18:50.767 ], 00:18:50.767 "driver_specific": { 00:18:50.767 "raid": { 00:18:50.767 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:50.767 "strip_size_kb": 0, 00:18:50.767 "state": "online", 00:18:50.767 "raid_level": "raid1", 00:18:50.767 "superblock": true, 00:18:50.767 "num_base_bdevs": 2, 00:18:50.767 "num_base_bdevs_discovered": 2, 00:18:50.767 "num_base_bdevs_operational": 2, 00:18:50.767 "base_bdevs_list": [ 00:18:50.767 { 00:18:50.767 "name": "pt1", 00:18:50.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.767 "is_configured": true, 00:18:50.767 "data_offset": 256, 00:18:50.767 "data_size": 7936 00:18:50.767 }, 00:18:50.767 { 00:18:50.767 "name": "pt2", 00:18:50.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.767 "is_configured": true, 00:18:50.767 "data_offset": 256, 00:18:50.767 "data_size": 7936 00:18:50.767 } 00:18:50.767 ] 00:18:50.767 } 00:18:50.767 } 00:18:50.767 }' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:50.767 pt2' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.767 
16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.767 [2024-09-28 16:20:05.424682] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.767 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' c9614f0a-4ab7-415d-be97-c40be8de6401 '!=' c9614f0a-4ab7-415d-be97-c40be8de6401 ']' 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.028 [2024-09-28 16:20:05.468433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:51.028 16:20:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.028 16:20:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.028 "name": "raid_bdev1", 00:18:51.028 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:51.028 "strip_size_kb": 0, 00:18:51.028 "state": "online", 00:18:51.028 "raid_level": "raid1", 00:18:51.028 "superblock": true, 00:18:51.028 "num_base_bdevs": 2, 00:18:51.028 "num_base_bdevs_discovered": 1, 00:18:51.028 "num_base_bdevs_operational": 1, 00:18:51.028 "base_bdevs_list": [ 00:18:51.028 { 00:18:51.028 "name": null, 00:18:51.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.028 "is_configured": false, 00:18:51.028 "data_offset": 0, 00:18:51.028 "data_size": 7936 00:18:51.028 }, 00:18:51.028 { 00:18:51.028 "name": "pt2", 00:18:51.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.028 "is_configured": true, 00:18:51.028 "data_offset": 256, 00:18:51.028 "data_size": 7936 00:18:51.028 } 00:18:51.028 ] 00:18:51.028 }' 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.028 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.288 [2024-09-28 16:20:05.911646] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.288 [2024-09-28 16:20:05.911706] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.288 [2024-09-28 16:20:05.911771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.288 [2024-09-28 16:20:05.911822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.288 [2024-09-28 16:20:05.911854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.288 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.548 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.548 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:51.548 16:20:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:51.548 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:51.548 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:51.548 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.549 [2024-09-28 16:20:05.987547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:51.549 [2024-09-28 16:20:05.987596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.549 [2024-09-28 16:20:05.987611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:51.549 [2024-09-28 16:20:05.987621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.549 [2024-09-28 16:20:05.989468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.549 [2024-09-28 16:20:05.989507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:51.549 [2024-09-28 16:20:05.989548] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:51.549 [2024-09-28 16:20:05.989601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:51.549 [2024-09-28 16:20:05.989653] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:51.549 [2024-09-28 16:20:05.989664] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:51.549 [2024-09-28 16:20:05.989743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.549 [2024-09-28 16:20:05.989804] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:51.549 [2024-09-28 16:20:05.989811] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:51.549 [2024-09-28 16:20:05.989859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.549 pt2 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.549 16:20:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.549 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.549 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.549 "name": "raid_bdev1", 00:18:51.549 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:51.549 "strip_size_kb": 0, 00:18:51.549 "state": "online", 00:18:51.549 "raid_level": "raid1", 00:18:51.549 "superblock": true, 00:18:51.549 "num_base_bdevs": 2, 00:18:51.549 "num_base_bdevs_discovered": 1, 00:18:51.549 "num_base_bdevs_operational": 1, 00:18:51.549 "base_bdevs_list": [ 00:18:51.549 { 00:18:51.549 "name": null, 00:18:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.549 "is_configured": false, 00:18:51.549 "data_offset": 256, 00:18:51.549 "data_size": 7936 00:18:51.549 }, 00:18:51.549 { 00:18:51.549 "name": "pt2", 00:18:51.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.549 "is_configured": true, 00:18:51.549 "data_offset": 256, 00:18:51.549 "data_size": 7936 00:18:51.549 } 00:18:51.549 ] 00:18:51.549 }' 00:18:51.549 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.549 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:51.809 16:20:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 [2024-09-28 16:20:06.462667] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.809 [2024-09-28 16:20:06.462733] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.809 [2024-09-28 16:20:06.462797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.809 [2024-09-28 16:20:06.462856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.809 [2024-09-28 16:20:06.462905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.809 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.070 [2024-09-28 16:20:06.522587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:52.070 [2024-09-28 16:20:06.522669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.070 [2024-09-28 16:20:06.522700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:52.070 [2024-09-28 16:20:06.522726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.070 [2024-09-28 16:20:06.524496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.070 [2024-09-28 16:20:06.524565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:52.070 [2024-09-28 16:20:06.524626] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:52.070 [2024-09-28 16:20:06.524680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:52.070 [2024-09-28 16:20:06.524772] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:52.070 [2024-09-28 16:20:06.524830] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.070 [2024-09-28 16:20:06.524862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:52.070 [2024-09-28 16:20:06.524975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:52.070 [2024-09-28 16:20:06.525076] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:52.070 [2024-09-28 16:20:06.525111] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:52.070 [2024-09-28 16:20:06.525178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:52.070 [2024-09-28 16:20:06.525277] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:52.070 [2024-09-28 16:20:06.525317] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:52.070 [2024-09-28 16:20:06.525411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.070 pt1 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.070 16:20:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.070 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.071 "name": "raid_bdev1", 00:18:52.071 "uuid": "c9614f0a-4ab7-415d-be97-c40be8de6401", 00:18:52.071 "strip_size_kb": 0, 00:18:52.071 "state": "online", 00:18:52.071 "raid_level": "raid1", 00:18:52.071 "superblock": true, 00:18:52.071 "num_base_bdevs": 2, 00:18:52.071 "num_base_bdevs_discovered": 1, 00:18:52.071 "num_base_bdevs_operational": 1, 00:18:52.071 "base_bdevs_list": [ 00:18:52.071 { 00:18:52.071 "name": null, 00:18:52.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.071 "is_configured": false, 00:18:52.071 "data_offset": 256, 00:18:52.071 "data_size": 7936 00:18:52.071 }, 00:18:52.071 { 00:18:52.071 "name": "pt2", 00:18:52.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.071 "is_configured": true, 00:18:52.071 "data_offset": 256, 00:18:52.071 "data_size": 7936 00:18:52.071 } 00:18:52.071 ] 00:18:52.071 }' 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.071 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.331 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:52.331 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:52.331 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.331 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.331 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.331 16:20:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:52.331 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:52.331 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:52.331 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.331 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.331 [2024-09-28 16:20:07.009937] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' c9614f0a-4ab7-415d-be97-c40be8de6401 '!=' c9614f0a-4ab7-415d-be97-c40be8de6401 ']' 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88729 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88729 ']' 00:18:52.592 16:20:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88729 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88729 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88729' 00:18:52.592 killing process with pid 88729 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 88729 00:18:52.592 [2024-09-28 16:20:07.089892] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.592 [2024-09-28 16:20:07.089947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.592 [2024-09-28 16:20:07.089978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.592 [2024-09-28 16:20:07.089992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:52.592 16:20:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 88729 00:18:52.852 [2024-09-28 16:20:07.285158] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.790 16:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:53.790 00:18:53.790 real 0m6.142s 00:18:53.790 user 0m9.247s 00:18:53.790 sys 0m1.115s 00:18:53.790 
************************************ 00:18:53.790 END TEST raid_superblock_test_md_interleaved 00:18:53.790 ************************************ 00:18:53.790 16:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.790 16:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.051 16:20:08 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:54.051 16:20:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:54.051 16:20:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:54.051 16:20:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.051 ************************************ 00:18:54.051 START TEST raid_rebuild_test_sb_md_interleaved 00:18:54.051 ************************************ 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89053 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89053 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89053 ']' 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.051 16:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.051 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:54.051 Zero copy mechanism will not be used. 00:18:54.051 [2024-09-28 16:20:08.654013] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:54.052 [2024-09-28 16:20:08.654121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89053 ] 00:18:54.312 [2024-09-28 16:20:08.817671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.571 [2024-09-28 16:20:09.014202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.571 [2024-09-28 16:20:09.187927] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.571 [2024-09-28 16:20:09.187966] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.831 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.831 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:54.831 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:54.831 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:54.831 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.831 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.091 BaseBdev1_malloc 00:18:55.091 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.092 [2024-09-28 16:20:09.556605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:55.092 [2024-09-28 16:20:09.556669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.092 [2024-09-28 16:20:09.556690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:55.092 [2024-09-28 16:20:09.556701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.092 [2024-09-28 16:20:09.558380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.092 [2024-09-28 16:20:09.558501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:55.092 BaseBdev1 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.092 BaseBdev2_malloc 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.092 [2024-09-28 16:20:09.640264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:55.092 [2024-09-28 16:20:09.640322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.092 [2024-09-28 16:20:09.640342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:55.092 [2024-09-28 16:20:09.640353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.092 [2024-09-28 16:20:09.641992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.092 [2024-09-28 16:20:09.642030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:55.092 BaseBdev2 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.092 spare_malloc 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.092 spare_delay 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.092 [2024-09-28 16:20:09.706198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:55.092 [2024-09-28 16:20:09.706346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.092 [2024-09-28 16:20:09.706371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:55.092 [2024-09-28 16:20:09.706383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.092 [2024-09-28 16:20:09.708112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.092 [2024-09-28 16:20:09.708157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:55.092 spare 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.092 [2024-09-28 16:20:09.718245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.092 [2024-09-28 16:20:09.719903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.092 [2024-09-28 
16:20:09.720091] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:55.092 [2024-09-28 16:20:09.720105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:55.092 [2024-09-28 16:20:09.720177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:55.092 [2024-09-28 16:20:09.720253] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:55.092 [2024-09-28 16:20:09.720261] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:55.092 [2024-09-28 16:20:09.720321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.092 "name": "raid_bdev1", 00:18:55.092 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:18:55.092 "strip_size_kb": 0, 00:18:55.092 "state": "online", 00:18:55.092 "raid_level": "raid1", 00:18:55.092 "superblock": true, 00:18:55.092 "num_base_bdevs": 2, 00:18:55.092 "num_base_bdevs_discovered": 2, 00:18:55.092 "num_base_bdevs_operational": 2, 00:18:55.092 "base_bdevs_list": [ 00:18:55.092 { 00:18:55.092 "name": "BaseBdev1", 00:18:55.092 "uuid": "91a3e6f3-75ce-505c-840c-4d49245467e2", 00:18:55.092 "is_configured": true, 00:18:55.092 "data_offset": 256, 00:18:55.092 "data_size": 7936 00:18:55.092 }, 00:18:55.092 { 00:18:55.092 "name": "BaseBdev2", 00:18:55.092 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:18:55.092 "is_configured": true, 00:18:55.092 "data_offset": 256, 00:18:55.092 "data_size": 7936 00:18:55.092 } 00:18:55.092 ] 00:18:55.092 }' 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.092 16:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.661 16:20:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:55.661 [2024-09-28 16:20:10.181591] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:55.661 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:55.662 16:20:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.662 [2024-09-28 16:20:10.281199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.662 16:20:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.662 "name": "raid_bdev1", 00:18:55.662 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:18:55.662 "strip_size_kb": 0, 00:18:55.662 "state": "online", 00:18:55.662 "raid_level": "raid1", 00:18:55.662 "superblock": true, 00:18:55.662 "num_base_bdevs": 2, 00:18:55.662 "num_base_bdevs_discovered": 1, 00:18:55.662 "num_base_bdevs_operational": 1, 00:18:55.662 "base_bdevs_list": [ 00:18:55.662 { 00:18:55.662 "name": null, 00:18:55.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.662 "is_configured": false, 00:18:55.662 "data_offset": 0, 00:18:55.662 "data_size": 7936 00:18:55.662 }, 00:18:55.662 { 00:18:55.662 "name": "BaseBdev2", 00:18:55.662 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:18:55.662 "is_configured": true, 00:18:55.662 "data_offset": 256, 00:18:55.662 "data_size": 7936 00:18:55.662 } 00:18:55.662 ] 00:18:55.662 }' 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.662 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.231 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:56.231 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.231 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.231 [2024-09-28 16:20:10.736430] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.231 [2024-09-28 16:20:10.751749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:56.231 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.231 16:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:56.231 [2024-09-28 16:20:10.753404] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.168 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.168 "name": "raid_bdev1", 00:18:57.168 
"uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:18:57.168 "strip_size_kb": 0, 00:18:57.168 "state": "online", 00:18:57.168 "raid_level": "raid1", 00:18:57.168 "superblock": true, 00:18:57.168 "num_base_bdevs": 2, 00:18:57.168 "num_base_bdevs_discovered": 2, 00:18:57.168 "num_base_bdevs_operational": 2, 00:18:57.168 "process": { 00:18:57.168 "type": "rebuild", 00:18:57.168 "target": "spare", 00:18:57.168 "progress": { 00:18:57.168 "blocks": 2560, 00:18:57.169 "percent": 32 00:18:57.169 } 00:18:57.169 }, 00:18:57.169 "base_bdevs_list": [ 00:18:57.169 { 00:18:57.169 "name": "spare", 00:18:57.169 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:18:57.169 "is_configured": true, 00:18:57.169 "data_offset": 256, 00:18:57.169 "data_size": 7936 00:18:57.169 }, 00:18:57.169 { 00:18:57.169 "name": "BaseBdev2", 00:18:57.169 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:18:57.169 "is_configured": true, 00:18:57.169 "data_offset": 256, 00:18:57.169 "data_size": 7936 00:18:57.169 } 00:18:57.169 ] 00:18:57.169 }' 00:18:57.169 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.428 [2024-09-28 16:20:11.917166] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:57.428 [2024-09-28 16:20:11.958074] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:57.428 [2024-09-28 16:20:11.958130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.428 [2024-09-28 16:20:11.958144] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.428 [2024-09-28 16:20:11.958153] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.428 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.429 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.429 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.429 16:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.429 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.429 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.429 "name": "raid_bdev1", 00:18:57.429 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:18:57.429 "strip_size_kb": 0, 00:18:57.429 "state": "online", 00:18:57.429 "raid_level": "raid1", 00:18:57.429 "superblock": true, 00:18:57.429 "num_base_bdevs": 2, 00:18:57.429 "num_base_bdevs_discovered": 1, 00:18:57.429 "num_base_bdevs_operational": 1, 00:18:57.429 "base_bdevs_list": [ 00:18:57.429 { 00:18:57.429 "name": null, 00:18:57.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.429 "is_configured": false, 00:18:57.429 "data_offset": 0, 00:18:57.429 "data_size": 7936 00:18:57.429 }, 00:18:57.429 { 00:18:57.429 "name": "BaseBdev2", 00:18:57.429 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:18:57.429 "is_configured": true, 00:18:57.429 "data_offset": 256, 00:18:57.429 "data_size": 7936 00:18:57.429 } 00:18:57.429 ] 00:18:57.429 }' 00:18:57.429 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.429 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.997 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.997 "name": "raid_bdev1", 00:18:57.997 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:18:57.997 "strip_size_kb": 0, 00:18:57.997 "state": "online", 00:18:57.997 "raid_level": "raid1", 00:18:57.997 "superblock": true, 00:18:57.997 "num_base_bdevs": 2, 00:18:57.997 "num_base_bdevs_discovered": 1, 00:18:57.997 "num_base_bdevs_operational": 1, 00:18:57.997 "base_bdevs_list": [ 00:18:57.997 { 00:18:57.997 "name": null, 00:18:57.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.997 "is_configured": false, 00:18:57.997 "data_offset": 0, 00:18:57.997 "data_size": 7936 00:18:57.997 }, 00:18:57.998 { 00:18:57.998 "name": "BaseBdev2", 00:18:57.998 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:18:57.998 "is_configured": true, 00:18:57.998 "data_offset": 256, 00:18:57.998 "data_size": 7936 00:18:57.998 } 00:18:57.998 ] 00:18:57.998 }' 
00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.998 [2024-09-28 16:20:12.558862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.998 [2024-09-28 16:20:12.573444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.998 16:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:57.998 [2024-09-28 16:20:12.575100] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.937 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.197 "name": "raid_bdev1", 00:18:59.197 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:18:59.197 "strip_size_kb": 0, 00:18:59.197 "state": "online", 00:18:59.197 "raid_level": "raid1", 00:18:59.197 "superblock": true, 00:18:59.197 "num_base_bdevs": 2, 00:18:59.197 "num_base_bdevs_discovered": 2, 00:18:59.197 "num_base_bdevs_operational": 2, 00:18:59.197 "process": { 00:18:59.197 "type": "rebuild", 00:18:59.197 "target": "spare", 00:18:59.197 "progress": { 00:18:59.197 "blocks": 2560, 00:18:59.197 "percent": 32 00:18:59.197 } 00:18:59.197 }, 00:18:59.197 "base_bdevs_list": [ 00:18:59.197 { 00:18:59.197 "name": "spare", 00:18:59.197 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:18:59.197 "is_configured": true, 00:18:59.197 "data_offset": 256, 00:18:59.197 "data_size": 7936 00:18:59.197 }, 00:18:59.197 { 00:18:59.197 "name": "BaseBdev2", 00:18:59.197 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:18:59.197 "is_configured": true, 00:18:59.197 "data_offset": 256, 00:18:59.197 "data_size": 7936 00:18:59.197 } 00:18:59.197 ] 00:18:59.197 }' 00:18:59.197 16:20:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:59.197 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:59.197 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=746 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.198 16:20:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.198 "name": "raid_bdev1", 00:18:59.198 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:18:59.198 "strip_size_kb": 0, 00:18:59.198 "state": "online", 00:18:59.198 "raid_level": "raid1", 00:18:59.198 "superblock": true, 00:18:59.198 "num_base_bdevs": 2, 00:18:59.198 "num_base_bdevs_discovered": 2, 00:18:59.198 "num_base_bdevs_operational": 2, 00:18:59.198 "process": { 00:18:59.198 "type": "rebuild", 00:18:59.198 "target": "spare", 00:18:59.198 "progress": { 00:18:59.198 "blocks": 2816, 00:18:59.198 "percent": 35 00:18:59.198 } 00:18:59.198 }, 00:18:59.198 "base_bdevs_list": [ 00:18:59.198 { 00:18:59.198 "name": "spare", 00:18:59.198 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:18:59.198 "is_configured": true, 00:18:59.198 "data_offset": 256, 00:18:59.198 "data_size": 7936 00:18:59.198 }, 00:18:59.198 { 00:18:59.198 "name": "BaseBdev2", 00:18:59.198 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:18:59.198 "is_configured": true, 00:18:59.198 "data_offset": 256, 00:18:59.198 "data_size": 7936 00:18:59.198 } 00:18:59.198 ] 00:18:59.198 }' 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.198 16:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.580 16:20:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.580 "name": "raid_bdev1", 00:19:00.580 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:00.580 "strip_size_kb": 0, 00:19:00.580 "state": "online", 00:19:00.580 "raid_level": "raid1", 00:19:00.580 "superblock": true, 00:19:00.580 "num_base_bdevs": 2, 00:19:00.580 "num_base_bdevs_discovered": 2, 00:19:00.580 "num_base_bdevs_operational": 2, 00:19:00.580 "process": { 00:19:00.580 "type": "rebuild", 00:19:00.580 "target": "spare", 00:19:00.580 "progress": { 00:19:00.580 "blocks": 5632, 00:19:00.580 "percent": 70 00:19:00.580 } 00:19:00.580 }, 00:19:00.580 "base_bdevs_list": [ 00:19:00.580 { 00:19:00.580 "name": "spare", 00:19:00.580 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:00.580 "is_configured": true, 00:19:00.580 "data_offset": 256, 00:19:00.580 "data_size": 7936 00:19:00.580 }, 00:19:00.580 { 00:19:00.580 "name": "BaseBdev2", 00:19:00.580 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:00.580 "is_configured": true, 00:19:00.580 "data_offset": 256, 00:19:00.580 "data_size": 7936 00:19:00.580 } 00:19:00.580 ] 00:19:00.580 }' 00:19:00.580 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.581 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.581 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.581 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.581 16:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:01.150 [2024-09-28 16:20:15.686420] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:01.150 [2024-09-28 16:20:15.686485] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:01.150 [2024-09-28 16:20:15.686570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.408 16:20:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:01.408 16:20:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.408 16:20:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.408 16:20:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.408 16:20:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.408 16:20:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.408 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.408 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.408 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.408 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.408 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.408 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.408 "name": "raid_bdev1", 00:19:01.408 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:01.408 "strip_size_kb": 0, 00:19:01.408 "state": "online", 00:19:01.408 "raid_level": "raid1", 00:19:01.408 "superblock": true, 00:19:01.408 "num_base_bdevs": 2, 00:19:01.408 
"num_base_bdevs_discovered": 2, 00:19:01.408 "num_base_bdevs_operational": 2, 00:19:01.408 "base_bdevs_list": [ 00:19:01.408 { 00:19:01.408 "name": "spare", 00:19:01.408 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:01.408 "is_configured": true, 00:19:01.408 "data_offset": 256, 00:19:01.408 "data_size": 7936 00:19:01.408 }, 00:19:01.408 { 00:19:01.408 "name": "BaseBdev2", 00:19:01.408 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:01.408 "is_configured": true, 00:19:01.408 "data_offset": 256, 00:19:01.408 "data_size": 7936 00:19:01.408 } 00:19:01.408 ] 00:19:01.408 }' 00:19:01.408 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.667 
16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.667 "name": "raid_bdev1", 00:19:01.667 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:01.667 "strip_size_kb": 0, 00:19:01.667 "state": "online", 00:19:01.667 "raid_level": "raid1", 00:19:01.667 "superblock": true, 00:19:01.667 "num_base_bdevs": 2, 00:19:01.667 "num_base_bdevs_discovered": 2, 00:19:01.667 "num_base_bdevs_operational": 2, 00:19:01.667 "base_bdevs_list": [ 00:19:01.667 { 00:19:01.667 "name": "spare", 00:19:01.667 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:01.667 "is_configured": true, 00:19:01.667 "data_offset": 256, 00:19:01.667 "data_size": 7936 00:19:01.667 }, 00:19:01.667 { 00:19:01.667 "name": "BaseBdev2", 00:19:01.667 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:01.667 "is_configured": true, 00:19:01.667 "data_offset": 256, 00:19:01.667 "data_size": 7936 00:19:01.667 } 00:19:01.667 ] 00:19:01.667 }' 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.667 16:20:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.667 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.668 "name": 
"raid_bdev1", 00:19:01.668 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:01.668 "strip_size_kb": 0, 00:19:01.668 "state": "online", 00:19:01.668 "raid_level": "raid1", 00:19:01.668 "superblock": true, 00:19:01.668 "num_base_bdevs": 2, 00:19:01.668 "num_base_bdevs_discovered": 2, 00:19:01.668 "num_base_bdevs_operational": 2, 00:19:01.668 "base_bdevs_list": [ 00:19:01.668 { 00:19:01.668 "name": "spare", 00:19:01.668 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:01.668 "is_configured": true, 00:19:01.668 "data_offset": 256, 00:19:01.668 "data_size": 7936 00:19:01.668 }, 00:19:01.668 { 00:19:01.668 "name": "BaseBdev2", 00:19:01.668 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:01.668 "is_configured": true, 00:19:01.668 "data_offset": 256, 00:19:01.668 "data_size": 7936 00:19:01.668 } 00:19:01.668 ] 00:19:01.668 }' 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.668 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.239 [2024-09-28 16:20:16.737624] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.239 [2024-09-28 16:20:16.737653] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.239 [2024-09-28 16:20:16.737720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.239 [2024-09-28 16:20:16.737777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.239 [2024-09-28 
16:20:16.737786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.239 16:20:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.239 [2024-09-28 16:20:16.813497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:02.239 [2024-09-28 16:20:16.813545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.239 [2024-09-28 16:20:16.813567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:02.239 [2024-09-28 16:20:16.813575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.239 [2024-09-28 16:20:16.815403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.239 [2024-09-28 16:20:16.815436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:02.239 [2024-09-28 16:20:16.815492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:02.239 [2024-09-28 16:20:16.815546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.239 [2024-09-28 16:20:16.815639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.239 spare 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.239 [2024-09-28 16:20:16.915532] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:02.239 [2024-09-28 16:20:16.915561] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:02.239 [2024-09-28 16:20:16.915638] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:02.239 [2024-09-28 16:20:16.915707] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:02.239 [2024-09-28 16:20:16.915715] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:02.239 [2024-09-28 16:20:16.915783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.239 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.499 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.499 16:20:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.499 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.499 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.499 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.499 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.499 "name": "raid_bdev1", 00:19:02.499 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:02.499 "strip_size_kb": 0, 00:19:02.499 "state": "online", 00:19:02.499 "raid_level": "raid1", 00:19:02.499 "superblock": true, 00:19:02.499 "num_base_bdevs": 2, 00:19:02.499 "num_base_bdevs_discovered": 2, 00:19:02.499 "num_base_bdevs_operational": 2, 00:19:02.499 "base_bdevs_list": [ 00:19:02.499 { 00:19:02.499 "name": "spare", 00:19:02.499 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:02.499 "is_configured": true, 00:19:02.499 "data_offset": 256, 00:19:02.499 "data_size": 7936 00:19:02.499 }, 00:19:02.499 { 00:19:02.499 "name": "BaseBdev2", 00:19:02.499 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:02.499 "is_configured": true, 00:19:02.499 "data_offset": 256, 00:19:02.499 "data_size": 7936 00:19:02.499 } 00:19:02.499 ] 00:19:02.499 }' 00:19:02.499 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.499 16:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.759 16:20:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.759 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.759 "name": "raid_bdev1", 00:19:02.759 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:02.759 "strip_size_kb": 0, 00:19:02.759 "state": "online", 00:19:02.759 "raid_level": "raid1", 00:19:02.759 "superblock": true, 00:19:02.759 "num_base_bdevs": 2, 00:19:02.759 "num_base_bdevs_discovered": 2, 00:19:02.759 "num_base_bdevs_operational": 2, 00:19:02.759 "base_bdevs_list": [ 00:19:02.759 { 00:19:02.759 "name": "spare", 00:19:02.759 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:02.759 "is_configured": true, 00:19:02.759 "data_offset": 256, 00:19:02.759 "data_size": 7936 00:19:02.759 }, 00:19:02.759 { 00:19:02.759 "name": "BaseBdev2", 00:19:02.759 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:02.759 "is_configured": true, 00:19:02.759 "data_offset": 256, 00:19:02.759 "data_size": 7936 00:19:02.759 } 00:19:02.759 ] 00:19:02.759 }' 00:19:02.759 16:20:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.019 [2024-09-28 16:20:17.528329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.019 16:20:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.019 "name": "raid_bdev1", 00:19:03.019 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:03.019 "strip_size_kb": 0, 00:19:03.019 "state": "online", 00:19:03.019 
"raid_level": "raid1", 00:19:03.019 "superblock": true, 00:19:03.019 "num_base_bdevs": 2, 00:19:03.019 "num_base_bdevs_discovered": 1, 00:19:03.019 "num_base_bdevs_operational": 1, 00:19:03.019 "base_bdevs_list": [ 00:19:03.019 { 00:19:03.019 "name": null, 00:19:03.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.019 "is_configured": false, 00:19:03.019 "data_offset": 0, 00:19:03.019 "data_size": 7936 00:19:03.019 }, 00:19:03.019 { 00:19:03.019 "name": "BaseBdev2", 00:19:03.019 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:03.019 "is_configured": true, 00:19:03.019 "data_offset": 256, 00:19:03.019 "data_size": 7936 00:19:03.019 } 00:19:03.019 ] 00:19:03.019 }' 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.019 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.588 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.588 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.588 16:20:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.588 [2024-09-28 16:20:17.991594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.588 [2024-09-28 16:20:17.991726] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:03.588 [2024-09-28 16:20:17.991743] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:03.588 [2024-09-28 16:20:17.991771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.588 [2024-09-28 16:20:18.005838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:03.588 16:20:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.588 16:20:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:03.588 [2024-09-28 16:20:18.007501] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:04.528 "name": "raid_bdev1", 00:19:04.528 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:04.528 "strip_size_kb": 0, 00:19:04.528 "state": "online", 00:19:04.528 "raid_level": "raid1", 00:19:04.528 "superblock": true, 00:19:04.528 "num_base_bdevs": 2, 00:19:04.528 "num_base_bdevs_discovered": 2, 00:19:04.528 "num_base_bdevs_operational": 2, 00:19:04.528 "process": { 00:19:04.528 "type": "rebuild", 00:19:04.528 "target": "spare", 00:19:04.528 "progress": { 00:19:04.528 "blocks": 2560, 00:19:04.528 "percent": 32 00:19:04.528 } 00:19:04.528 }, 00:19:04.528 "base_bdevs_list": [ 00:19:04.528 { 00:19:04.528 "name": "spare", 00:19:04.528 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:04.528 "is_configured": true, 00:19:04.528 "data_offset": 256, 00:19:04.528 "data_size": 7936 00:19:04.528 }, 00:19:04.528 { 00:19:04.528 "name": "BaseBdev2", 00:19:04.528 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:04.528 "is_configured": true, 00:19:04.528 "data_offset": 256, 00:19:04.528 "data_size": 7936 00:19:04.528 } 00:19:04.528 ] 00:19:04.528 }' 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.528 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.528 [2024-09-28 16:20:19.167246] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.788 [2024-09-28 16:20:19.212132] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:04.788 [2024-09-28 16:20:19.212186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.788 [2024-09-28 16:20:19.212199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.788 [2024-09-28 16:20:19.212208] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.788 16:20:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.788 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.788 "name": "raid_bdev1", 00:19:04.788 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:04.788 "strip_size_kb": 0, 00:19:04.788 "state": "online", 00:19:04.788 "raid_level": "raid1", 00:19:04.788 "superblock": true, 00:19:04.788 "num_base_bdevs": 2, 00:19:04.788 "num_base_bdevs_discovered": 1, 00:19:04.788 "num_base_bdevs_operational": 1, 00:19:04.788 "base_bdevs_list": [ 00:19:04.788 { 00:19:04.788 "name": null, 00:19:04.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.788 "is_configured": false, 00:19:04.788 "data_offset": 0, 00:19:04.788 "data_size": 7936 00:19:04.789 }, 00:19:04.789 { 00:19:04.789 "name": "BaseBdev2", 00:19:04.789 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:04.789 "is_configured": true, 00:19:04.789 "data_offset": 256, 00:19:04.789 "data_size": 7936 00:19:04.789 } 00:19:04.789 ] 00:19:04.789 }' 00:19:04.789 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.789 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.048 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:05.048 16:20:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.048 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.048 [2024-09-28 16:20:19.666196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.048 [2024-09-28 16:20:19.666260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.048 [2024-09-28 16:20:19.666284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:05.048 [2024-09-28 16:20:19.666295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.048 [2024-09-28 16:20:19.666459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.048 [2024-09-28 16:20:19.666481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.049 [2024-09-28 16:20:19.666524] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:05.049 [2024-09-28 16:20:19.666536] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:05.049 [2024-09-28 16:20:19.666545] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:05.049 [2024-09-28 16:20:19.666564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.049 [2024-09-28 16:20:19.680673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:05.049 spare 00:19:05.049 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.049 [2024-09-28 16:20:19.682323] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.049 16:20:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:06.430 "name": "raid_bdev1", 00:19:06.430 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:06.430 "strip_size_kb": 0, 00:19:06.430 "state": "online", 00:19:06.430 "raid_level": "raid1", 00:19:06.430 "superblock": true, 00:19:06.430 "num_base_bdevs": 2, 00:19:06.430 "num_base_bdevs_discovered": 2, 00:19:06.430 "num_base_bdevs_operational": 2, 00:19:06.430 "process": { 00:19:06.430 "type": "rebuild", 00:19:06.430 "target": "spare", 00:19:06.430 "progress": { 00:19:06.430 "blocks": 2560, 00:19:06.430 "percent": 32 00:19:06.430 } 00:19:06.430 }, 00:19:06.430 "base_bdevs_list": [ 00:19:06.430 { 00:19:06.430 "name": "spare", 00:19:06.430 "uuid": "6a6cad2f-4d5a-58fc-a328-6df1006f2f0d", 00:19:06.430 "is_configured": true, 00:19:06.430 "data_offset": 256, 00:19:06.430 "data_size": 7936 00:19:06.430 }, 00:19:06.430 { 00:19:06.430 "name": "BaseBdev2", 00:19:06.430 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:06.430 "is_configured": true, 00:19:06.430 "data_offset": 256, 00:19:06.430 "data_size": 7936 00:19:06.430 } 00:19:06.430 ] 00:19:06.430 }' 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.430 [2024-09-28 
16:20:20.829843] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.430 [2024-09-28 16:20:20.886737] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:06.430 [2024-09-28 16:20:20.886788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.430 [2024-09-28 16:20:20.886804] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.430 [2024-09-28 16:20:20.886811] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.430 16:20:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.430 "name": "raid_bdev1", 00:19:06.430 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:06.430 "strip_size_kb": 0, 00:19:06.430 "state": "online", 00:19:06.430 "raid_level": "raid1", 00:19:06.430 "superblock": true, 00:19:06.430 "num_base_bdevs": 2, 00:19:06.430 "num_base_bdevs_discovered": 1, 00:19:06.430 "num_base_bdevs_operational": 1, 00:19:06.430 "base_bdevs_list": [ 00:19:06.430 { 00:19:06.430 "name": null, 00:19:06.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.430 "is_configured": false, 00:19:06.430 "data_offset": 0, 00:19:06.430 "data_size": 7936 00:19:06.430 }, 00:19:06.430 { 00:19:06.430 "name": "BaseBdev2", 00:19:06.430 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:06.430 "is_configured": true, 00:19:06.430 "data_offset": 256, 00:19:06.430 "data_size": 7936 00:19:06.430 } 00:19:06.430 ] 00:19:06.430 }' 00:19:06.430 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.431 16:20:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.690 16:20:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.690 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.950 "name": "raid_bdev1", 00:19:06.950 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:06.950 "strip_size_kb": 0, 00:19:06.950 "state": "online", 00:19:06.950 "raid_level": "raid1", 00:19:06.950 "superblock": true, 00:19:06.950 "num_base_bdevs": 2, 00:19:06.950 "num_base_bdevs_discovered": 1, 00:19:06.950 "num_base_bdevs_operational": 1, 00:19:06.950 "base_bdevs_list": [ 00:19:06.950 { 00:19:06.950 "name": null, 00:19:06.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.950 "is_configured": false, 00:19:06.950 "data_offset": 0, 00:19:06.950 "data_size": 7936 00:19:06.950 }, 00:19:06.950 { 00:19:06.950 "name": "BaseBdev2", 00:19:06.950 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:06.950 "is_configured": true, 00:19:06.950 "data_offset": 256, 
00:19:06.950 "data_size": 7936 00:19:06.950 } 00:19:06.950 ] 00:19:06.950 }' 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 [2024-09-28 16:20:21.480176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:06.950 [2024-09-28 16:20:21.480234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.950 [2024-09-28 16:20:21.480258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:06.950 [2024-09-28 16:20:21.480266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.950 [2024-09-28 16:20:21.480403] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.950 [2024-09-28 16:20:21.480415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:06.950 [2024-09-28 16:20:21.480457] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:06.950 [2024-09-28 16:20:21.480470] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:06.950 [2024-09-28 16:20:21.480479] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:06.950 [2024-09-28 16:20:21.480490] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:06.950 BaseBdev1 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.950 16:20:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.891 16:20:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.891 "name": "raid_bdev1", 00:19:07.891 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:07.891 "strip_size_kb": 0, 00:19:07.891 "state": "online", 00:19:07.891 "raid_level": "raid1", 00:19:07.891 "superblock": true, 00:19:07.891 "num_base_bdevs": 2, 00:19:07.891 "num_base_bdevs_discovered": 1, 00:19:07.891 "num_base_bdevs_operational": 1, 00:19:07.891 "base_bdevs_list": [ 00:19:07.891 { 00:19:07.891 "name": null, 00:19:07.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.891 "is_configured": false, 00:19:07.891 "data_offset": 0, 00:19:07.891 "data_size": 7936 00:19:07.891 }, 00:19:07.891 { 00:19:07.891 "name": "BaseBdev2", 00:19:07.891 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:07.891 "is_configured": true, 00:19:07.891 "data_offset": 256, 00:19:07.891 "data_size": 7936 00:19:07.891 } 00:19:07.891 ] 00:19:07.891 }' 00:19:07.891 16:20:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.891 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.461 "name": "raid_bdev1", 00:19:08.461 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:08.461 "strip_size_kb": 0, 00:19:08.461 "state": "online", 00:19:08.461 "raid_level": "raid1", 00:19:08.461 "superblock": true, 00:19:08.461 "num_base_bdevs": 2, 00:19:08.461 "num_base_bdevs_discovered": 1, 00:19:08.461 "num_base_bdevs_operational": 1, 00:19:08.461 "base_bdevs_list": [ 00:19:08.461 { 00:19:08.461 "name": 
null, 00:19:08.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.461 "is_configured": false, 00:19:08.461 "data_offset": 0, 00:19:08.461 "data_size": 7936 00:19:08.461 }, 00:19:08.461 { 00:19:08.461 "name": "BaseBdev2", 00:19:08.461 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:08.461 "is_configured": true, 00:19:08.461 "data_offset": 256, 00:19:08.461 "data_size": 7936 00:19:08.461 } 00:19:08.461 ] 00:19:08.461 }' 00:19:08.461 16:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.461 [2024-09-28 16:20:23.081411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.461 [2024-09-28 16:20:23.081516] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.461 [2024-09-28 16:20:23.081533] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:08.461 request: 00:19:08.461 { 00:19:08.461 "base_bdev": "BaseBdev1", 00:19:08.461 "raid_bdev": "raid_bdev1", 00:19:08.461 "method": "bdev_raid_add_base_bdev", 00:19:08.461 "req_id": 1 00:19:08.461 } 00:19:08.461 Got JSON-RPC error response 00:19:08.461 response: 00:19:08.461 { 00:19:08.461 "code": -22, 00:19:08.461 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:08.461 } 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.461 16:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.843 "name": "raid_bdev1", 00:19:09.843 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:09.843 "strip_size_kb": 0, 
00:19:09.843 "state": "online", 00:19:09.843 "raid_level": "raid1", 00:19:09.843 "superblock": true, 00:19:09.843 "num_base_bdevs": 2, 00:19:09.843 "num_base_bdevs_discovered": 1, 00:19:09.843 "num_base_bdevs_operational": 1, 00:19:09.843 "base_bdevs_list": [ 00:19:09.843 { 00:19:09.843 "name": null, 00:19:09.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.843 "is_configured": false, 00:19:09.843 "data_offset": 0, 00:19:09.843 "data_size": 7936 00:19:09.843 }, 00:19:09.843 { 00:19:09.843 "name": "BaseBdev2", 00:19:09.843 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:09.843 "is_configured": true, 00:19:09.843 "data_offset": 256, 00:19:09.843 "data_size": 7936 00:19:09.843 } 00:19:09.843 ] 00:19:09.843 }' 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.843 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.103 16:20:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.103 "name": "raid_bdev1", 00:19:10.103 "uuid": "89523064-4ac7-4be6-b3fd-65db8eb8cc95", 00:19:10.103 "strip_size_kb": 0, 00:19:10.103 "state": "online", 00:19:10.103 "raid_level": "raid1", 00:19:10.103 "superblock": true, 00:19:10.103 "num_base_bdevs": 2, 00:19:10.103 "num_base_bdevs_discovered": 1, 00:19:10.103 "num_base_bdevs_operational": 1, 00:19:10.103 "base_bdevs_list": [ 00:19:10.103 { 00:19:10.103 "name": null, 00:19:10.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.103 "is_configured": false, 00:19:10.103 "data_offset": 0, 00:19:10.103 "data_size": 7936 00:19:10.103 }, 00:19:10.103 { 00:19:10.103 "name": "BaseBdev2", 00:19:10.103 "uuid": "acade9b0-0b91-5a94-aadf-5ab17ee346c6", 00:19:10.103 "is_configured": true, 00:19:10.103 "data_offset": 256, 00:19:10.103 "data_size": 7936 00:19:10.103 } 00:19:10.103 ] 00:19:10.103 }' 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89053 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89053 ']' 00:19:10.103 16:20:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89053 00:19:10.103 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:10.104 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:10.104 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89053 00:19:10.104 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:10.104 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:10.104 killing process with pid 89053 00:19:10.104 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89053' 00:19:10.104 16:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89053 00:19:10.104 Received shutdown signal, test time was about 60.000000 seconds 00:19:10.104 00:19:10.104 Latency(us) 00:19:10.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.104 =================================================================================================================== 00:19:10.104 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.104 [2024-09-28 16:20:24.724008] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.104 [2024-09-28 16:20:24.724101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.104 [2024-09-28 16:20:24.724142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.104 [2024-09-28 16:20:24.724153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:10.104 16:20:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89053 00:19:10.364 [2024-09-28 16:20:25.003011] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.747 16:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:11.747 00:19:11.747 real 0m17.616s 00:19:11.747 user 0m23.075s 00:19:11.747 sys 0m1.708s 00:19:11.747 16:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.747 16:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.747 ************************************ 00:19:11.747 END TEST raid_rebuild_test_sb_md_interleaved 00:19:11.747 ************************************ 00:19:11.747 16:20:26 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:11.747 16:20:26 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:11.747 16:20:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89053 ']' 00:19:11.747 16:20:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89053 00:19:11.747 16:20:26 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:11.747 ************************************ 00:19:11.747 END TEST bdev_raid 00:19:11.747 ************************************ 00:19:11.747 00:19:11.747 real 12m8.795s 00:19:11.747 user 16m9.165s 00:19:11.747 sys 2m3.702s 00:19:11.747 16:20:26 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.747 16:20:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.747 16:20:26 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:11.747 16:20:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:11.747 16:20:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.747 16:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:11.747 ************************************ 00:19:11.747 START TEST spdkcli_raid 00:19:11.747 
************************************ 00:19:11.747 16:20:26 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:12.009 * Looking for test storage... 00:19:12.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.009 16:20:26 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:12.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.009 --rc genhtml_branch_coverage=1 00:19:12.009 --rc genhtml_function_coverage=1 00:19:12.009 --rc genhtml_legend=1 00:19:12.009 --rc geninfo_all_blocks=1 00:19:12.009 --rc geninfo_unexecuted_blocks=1 00:19:12.009 00:19:12.009 ' 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:12.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.009 --rc genhtml_branch_coverage=1 00:19:12.009 --rc genhtml_function_coverage=1 00:19:12.009 --rc genhtml_legend=1 00:19:12.009 --rc geninfo_all_blocks=1 00:19:12.009 --rc geninfo_unexecuted_blocks=1 00:19:12.009 00:19:12.009 ' 00:19:12.009 
16:20:26 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:12.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.009 --rc genhtml_branch_coverage=1 00:19:12.009 --rc genhtml_function_coverage=1 00:19:12.009 --rc genhtml_legend=1 00:19:12.009 --rc geninfo_all_blocks=1 00:19:12.009 --rc geninfo_unexecuted_blocks=1 00:19:12.009 00:19:12.009 ' 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:12.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.009 --rc genhtml_branch_coverage=1 00:19:12.009 --rc genhtml_function_coverage=1 00:19:12.009 --rc genhtml_legend=1 00:19:12.009 --rc geninfo_all_blocks=1 00:19:12.009 --rc geninfo_unexecuted_blocks=1 00:19:12.009 00:19:12.009 ' 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:12.009 16:20:26 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89729 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:12.009 16:20:26 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89729 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 89729 ']' 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.009 16:20:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.270 [2024-09-28 16:20:26.710128] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:12.270 [2024-09-28 16:20:26.710286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89729 ] 00:19:12.270 [2024-09-28 16:20:26.878253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:12.531 [2024-09-28 16:20:27.074299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.531 [2024-09-28 16:20:27.074338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.537 16:20:27 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.537 16:20:27 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:19:13.537 16:20:27 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:13.537 16:20:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:13.537 16:20:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.537 16:20:27 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:13.537 16:20:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.537 16:20:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.537 16:20:27 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:13.537 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:13.537 ' 00:19:14.929 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:14.929 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:14.929 16:20:29 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:14.929 16:20:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:14.929 16:20:29 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.929 16:20:29 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:14.929 16:20:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.929 16:20:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.929 16:20:29 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:14.929 ' 00:19:16.310 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:16.310 16:20:30 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:16.310 16:20:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.310 16:20:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.311 16:20:30 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:16.311 16:20:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.311 16:20:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.311 16:20:30 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:16.311 16:20:30 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:16.880 16:20:31 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:16.881 16:20:31 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:16.881 16:20:31 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:16.881 16:20:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.881 16:20:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.881 16:20:31 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:16.881 16:20:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.881 16:20:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.881 16:20:31 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:16.881 ' 00:19:17.820 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:17.820 16:20:32 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:17.820 16:20:32 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.820 16:20:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.079 16:20:32 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:18.079 16:20:32 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.079 16:20:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.079 16:20:32 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:18.079 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:18.079 ' 00:19:19.460 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:19.460 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:19.460 16:20:34 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.460 16:20:34 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89729 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89729 ']' 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89729 00:19:19.460 16:20:34 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89729 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89729' 00:19:19.460 killing process with pid 89729 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 89729 00:19:19.460 16:20:34 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 89729 00:19:21.998 16:20:36 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:21.998 16:20:36 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89729 ']' 00:19:21.998 16:20:36 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89729 00:19:21.998 16:20:36 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89729 ']' 00:19:21.998 16:20:36 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89729 00:19:21.998 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89729) - No such process 00:19:21.998 16:20:36 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 89729 is not found' 00:19:21.998 Process with pid 89729 is not found 00:19:21.998 16:20:36 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:21.998 16:20:36 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:21.998 16:20:36 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:21.998 16:20:36 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:21.998 00:19:21.998 real 0m10.144s 00:19:21.998 user 0m20.593s 00:19:21.998 sys 
0m1.190s 00:19:21.998 16:20:36 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.998 16:20:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.998 ************************************ 00:19:21.998 END TEST spdkcli_raid 00:19:21.998 ************************************ 00:19:21.998 16:20:36 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:21.998 16:20:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:21.998 16:20:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.998 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:19:21.998 ************************************ 00:19:21.998 START TEST blockdev_raid5f 00:19:21.998 ************************************ 00:19:21.998 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:21.998 * Looking for test storage... 00:19:21.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:21.998 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:22.258 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:19:22.258 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:22.258 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.258 16:20:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.259 16:20:36 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:22.259 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.259 --rc genhtml_branch_coverage=1 00:19:22.259 --rc genhtml_function_coverage=1 00:19:22.259 --rc genhtml_legend=1 00:19:22.259 --rc geninfo_all_blocks=1 00:19:22.259 --rc geninfo_unexecuted_blocks=1 00:19:22.259 00:19:22.259 ' 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.259 --rc genhtml_branch_coverage=1 00:19:22.259 --rc genhtml_function_coverage=1 00:19:22.259 --rc genhtml_legend=1 00:19:22.259 --rc geninfo_all_blocks=1 00:19:22.259 --rc geninfo_unexecuted_blocks=1 00:19:22.259 00:19:22.259 ' 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.259 --rc genhtml_branch_coverage=1 00:19:22.259 --rc genhtml_function_coverage=1 00:19:22.259 --rc genhtml_legend=1 00:19:22.259 --rc geninfo_all_blocks=1 00:19:22.259 --rc geninfo_unexecuted_blocks=1 00:19:22.259 00:19:22.259 ' 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.259 --rc genhtml_branch_coverage=1 00:19:22.259 --rc genhtml_function_coverage=1 00:19:22.259 --rc genhtml_legend=1 00:19:22.259 --rc geninfo_all_blocks=1 00:19:22.259 --rc geninfo_unexecuted_blocks=1 00:19:22.259 00:19:22.259 ' 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90009 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:22.259 16:20:36 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90009 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90009 ']' 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.259 16:20:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:22.259 [2024-09-28 16:20:36.907670] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:22.259 [2024-09-28 16:20:36.907829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90009 ] 00:19:22.519 [2024-09-28 16:20:37.075719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.778 [2024-09-28 16:20:37.270121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.353 16:20:38 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.353 16:20:38 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:19:23.353 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:23.353 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:23.353 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:23.353 16:20:38 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.613 Malloc0 00:19:23.613 Malloc1 00:19:23.613 Malloc2 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.613 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:23.613 16:20:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.873 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:23.873 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:23.873 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f027457a-891c-404e-ae96-4d2e28f1a3ac"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f027457a-891c-404e-ae96-4d2e28f1a3ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f027457a-891c-404e-ae96-4d2e28f1a3ac",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ea5ced93-881a-4e6b-9847-aa0bdfe8627b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"43a8b765-5462-449d-a8f3-31ec4166e8aa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "705fa194-a3a4-4539-939a-922b2c3fd1c5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:23.873 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:23.873 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:23.873 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:23.873 16:20:38 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90009 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90009 ']' 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90009 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90009 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:23.873 killing process with pid 90009 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90009' 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90009 00:19:23.873 16:20:38 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90009 00:19:26.415 16:20:40 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:26.415 16:20:40 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:26.415 16:20:40 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:26.415 16:20:40 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:26.415 16:20:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.415 ************************************ 00:19:26.415 START TEST bdev_hello_world 00:19:26.415 ************************************ 00:19:26.415 16:20:41 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:26.415 [2024-09-28 16:20:41.093901] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:26.415 [2024-09-28 16:20:41.094037] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90077 ] 00:19:26.676 [2024-09-28 16:20:41.268140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.936 [2024-09-28 16:20:41.461534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.506 [2024-09-28 16:20:41.982780] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:27.506 [2024-09-28 16:20:41.982828] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:27.506 [2024-09-28 16:20:41.982843] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:27.506 [2024-09-28 16:20:41.983315] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:27.506 [2024-09-28 16:20:41.983446] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:27.506 [2024-09-28 16:20:41.983462] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:27.506 [2024-09-28 16:20:41.983513] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:27.506 00:19:27.506 [2024-09-28 16:20:41.983529] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:28.889 00:19:28.889 real 0m2.423s 00:19:28.889 user 0m2.033s 00:19:28.889 sys 0m0.270s 00:19:28.889 16:20:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.889 ************************************ 00:19:28.889 END TEST bdev_hello_world 00:19:28.889 16:20:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:28.889 ************************************ 00:19:28.889 16:20:43 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:28.889 16:20:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:28.889 16:20:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.889 16:20:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:28.889 ************************************ 00:19:28.889 START TEST bdev_bounds 00:19:28.889 ************************************ 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90119 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:28.889 Process bdevio pid: 90119 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90119' 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90119 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90119 ']' 00:19:28.889 16:20:43 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.889 16:20:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:29.149 [2024-09-28 16:20:43.580664] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:29.149 [2024-09-28 16:20:43.580788] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90119 ] 00:19:29.150 [2024-09-28 16:20:43.745125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:29.409 [2024-09-28 16:20:43.941151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.409 [2024-09-28 16:20:43.941303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.409 [2024-09-28 16:20:43.941338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.978 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.978 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:29.978 16:20:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:29.978 I/O targets: 00:19:29.978 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:29.978 00:19:29.978 
00:19:29.978 CUnit - A unit testing framework for C - Version 2.1-3 00:19:29.978 http://cunit.sourceforge.net/ 00:19:29.978 00:19:29.978 00:19:29.978 Suite: bdevio tests on: raid5f 00:19:29.978 Test: blockdev write read block ...passed 00:19:29.978 Test: blockdev write zeroes read block ...passed 00:19:29.978 Test: blockdev write zeroes read no split ...passed 00:19:30.239 Test: blockdev write zeroes read split ...passed 00:19:30.239 Test: blockdev write zeroes read split partial ...passed 00:19:30.239 Test: blockdev reset ...passed 00:19:30.239 Test: blockdev write read 8 blocks ...passed 00:19:30.239 Test: blockdev write read size > 128k ...passed 00:19:30.239 Test: blockdev write read invalid size ...passed 00:19:30.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:30.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:30.239 Test: blockdev write read max offset ...passed 00:19:30.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:30.239 Test: blockdev writev readv 8 blocks ...passed 00:19:30.239 Test: blockdev writev readv 30 x 1block ...passed 00:19:30.239 Test: blockdev writev readv block ...passed 00:19:30.239 Test: blockdev writev readv size > 128k ...passed 00:19:30.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:30.239 Test: blockdev comparev and writev ...passed 00:19:30.239 Test: blockdev nvme passthru rw ...passed 00:19:30.239 Test: blockdev nvme passthru vendor specific ...passed 00:19:30.239 Test: blockdev nvme admin passthru ...passed 00:19:30.239 Test: blockdev copy ...passed 00:19:30.239 00:19:30.239 Run Summary: Type Total Ran Passed Failed Inactive 00:19:30.239 suites 1 1 n/a 0 0 00:19:30.239 tests 23 23 23 0 0 00:19:30.239 asserts 130 130 130 0 n/a 00:19:30.239 00:19:30.239 Elapsed time = 0.648 seconds 00:19:30.239 0 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90119 00:19:30.239 
16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90119 ']' 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90119 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90119 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.239 killing process with pid 90119 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90119' 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90119 00:19:30.239 16:20:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90119 00:19:32.149 16:20:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:32.149 00:19:32.149 real 0m2.843s 00:19:32.150 user 0m6.693s 00:19:32.150 sys 0m0.391s 00:19:32.150 16:20:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.150 16:20:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:32.150 ************************************ 00:19:32.150 END TEST bdev_bounds 00:19:32.150 ************************************ 00:19:32.150 16:20:46 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:32.150 16:20:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:32.150 16:20:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.150 
16:20:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:32.150 ************************************ 00:19:32.150 START TEST bdev_nbd 00:19:32.150 ************************************ 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90179 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90179 /var/tmp/spdk-nbd.sock 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90179 ']' 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.150 16:20:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:32.150 [2024-09-28 16:20:46.503174] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:32.150 [2024-09-28 16:20:46.503300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.150 [2024-09-28 16:20:46.667779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.410 [2024-09-28 16:20:46.854485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:32.981 1+0 records in 00:19:32.981 1+0 records out 00:19:32.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433807 s, 9.4 MB/s 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:32.981 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:33.241 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:33.241 { 00:19:33.241 "nbd_device": "/dev/nbd0", 00:19:33.241 "bdev_name": "raid5f" 00:19:33.241 } 00:19:33.241 ]' 00:19:33.241 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:33.241 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:33.241 { 00:19:33.241 "nbd_device": "/dev/nbd0", 00:19:33.241 "bdev_name": "raid5f" 00:19:33.241 } 00:19:33.241 ]' 00:19:33.241 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:33.501 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:33.501 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.501 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:33.501 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:33.501 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:33.501 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.501 16:20:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.501 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.761 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:34.022 /dev/nbd0 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:34.022 16:20:48 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.022 1+0 records in 00:19:34.022 1+0 records out 00:19:34.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041593 s, 9.8 MB/s 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.022 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:34.282 { 00:19:34.282 "nbd_device": "/dev/nbd0", 00:19:34.282 "bdev_name": "raid5f" 00:19:34.282 } 00:19:34.282 ]' 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:34.282 { 00:19:34.282 "nbd_device": "/dev/nbd0", 00:19:34.282 "bdev_name": "raid5f" 00:19:34.282 } 00:19:34.282 ]' 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:34.282 256+0 records in 00:19:34.282 256+0 records out 00:19:34.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619574 s, 169 MB/s 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:34.282 256+0 records in 00:19:34.282 256+0 records out 00:19:34.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298826 s, 35.1 MB/s 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:34.282 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.543 16:20:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.543 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:34.802 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:35.062 malloc_lvol_verify 00:19:35.062 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:35.321 f8171683-3c3a-4d67-9224-066dd301df3a 00:19:35.321 16:20:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:35.581 7d4d5912-1ca3-4608-b58f-d41d9478da29 00:19:35.581 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:35.581 /dev/nbd0 00:19:35.581 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:35.581 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:35.581 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:35.581 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:35.581 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:35.581 mke2fs 1.47.0 (5-Feb-2023) 00:19:35.841 Discarding device blocks: 0/4096 done 00:19:35.841 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:35.841 00:19:35.841 Allocating group tables: 0/1 done 00:19:35.841 Writing inode tables: 0/1 done 00:19:35.841 Creating journal (1024 blocks): done 00:19:35.841 Writing superblocks and filesystem accounting information: 0/1 done 00:19:35.841 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90179 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90179 ']' 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90179 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.841 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90179 00:19:36.101 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.101 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.101 killing process with pid 90179 00:19:36.101 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90179' 00:19:36.101 16:20:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90179 00:19:36.101 16:20:50 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90179 00:19:37.483 16:20:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:37.483 00:19:37.483 real 0m5.641s 00:19:37.483 user 0m7.514s 00:19:37.483 sys 0m1.350s 00:19:37.483 16:20:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:37.483 16:20:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:37.483 ************************************ 00:19:37.483 END TEST bdev_nbd 00:19:37.483 ************************************ 00:19:37.483 16:20:52 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:37.483 16:20:52 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:37.483 16:20:52 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:37.483 16:20:52 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:37.483 16:20:52 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:37.483 16:20:52 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:37.483 16:20:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.483 ************************************ 00:19:37.483 START TEST bdev_fio 00:19:37.483 ************************************ 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:37.483 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:37.483 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:37.744 ************************************ 00:19:37.744 START TEST bdev_fio_rw_verify 00:19:37.744 ************************************ 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:37.744 16:20:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:38.004 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:38.004 fio-3.35 00:19:38.004 Starting 1 thread 00:19:50.226 00:19:50.226 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90386: Sat Sep 28 16:21:03 2024 00:19:50.226 read: IOPS=12.4k, BW=48.3MiB/s (50.7MB/s)(483MiB/10001msec) 00:19:50.227 slat (usec): min=16, max=127, avg=18.92, stdev= 2.06 00:19:50.227 clat (usec): min=10, max=307, avg=129.46, stdev=44.84 00:19:50.227 lat (usec): min=29, max=327, avg=148.38, stdev=45.05 00:19:50.227 clat percentiles (usec): 00:19:50.227 | 50.000th=[ 133], 99.000th=[ 212], 99.900th=[ 231], 99.990th=[ 262], 00:19:50.227 | 99.999th=[ 293] 00:19:50.227 write: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(501MiB/9875msec); 0 zone resets 00:19:50.227 slat (usec): min=7, max=339, avg=16.19, stdev= 3.86 00:19:50.227 clat (usec): min=57, max=1098, avg=298.64, stdev=39.20 00:19:50.227 lat (usec): min=71, max=1360, avg=314.83, stdev=40.12 00:19:50.227 clat percentiles (usec): 00:19:50.227 | 50.000th=[ 302], 99.000th=[ 371], 99.900th=[ 586], 99.990th=[ 947], 00:19:50.227 | 99.999th=[ 1045] 00:19:50.227 bw ( KiB/s): min=48728, max=54616, per=98.80%, avg=51341.42, stdev=1469.48, samples=19 00:19:50.227 iops : min=12182, max=13654, avg=12835.32, stdev=367.35, samples=19 00:19:50.227 lat (usec) : 20=0.01%, 50=0.01%, 100=15.25%, 
250=39.25%, 500=45.43% 00:19:50.227 lat (usec) : 750=0.05%, 1000=0.02% 00:19:50.227 lat (msec) : 2=0.01% 00:19:50.227 cpu : usr=98.77%, sys=0.57%, ctx=35, majf=0, minf=10144 00:19:50.227 IO depths : 1=7.6%, 2=19.8%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.227 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.227 issued rwts: total=123676,128290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.227 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:50.227 00:19:50.227 Run status group 0 (all jobs): 00:19:50.227 READ: bw=48.3MiB/s (50.7MB/s), 48.3MiB/s-48.3MiB/s (50.7MB/s-50.7MB/s), io=483MiB (507MB), run=10001-10001msec 00:19:50.227 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=501MiB (525MB), run=9875-9875msec 00:19:50.488 ----------------------------------------------------- 00:19:50.488 Suppressions used: 00:19:50.488 count bytes template 00:19:50.488 1 7 /usr/src/fio/parse.c 00:19:50.488 769 73824 /usr/src/fio/iolog.c 00:19:50.488 1 8 libtcmalloc_minimal.so 00:19:50.488 1 904 libcrypto.so 00:19:50.488 ----------------------------------------------------- 00:19:50.488 00:19:50.488 00:19:50.488 real 0m12.667s 00:19:50.488 user 0m12.932s 00:19:50.488 sys 0m0.757s 00:19:50.488 16:21:04 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.488 16:21:04 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:50.488 ************************************ 00:19:50.488 END TEST bdev_fio_rw_verify 00:19:50.488 ************************************ 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f027457a-891c-404e-ae96-4d2e28f1a3ac"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f027457a-891c-404e-ae96-4d2e28f1a3ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f027457a-891c-404e-ae96-4d2e28f1a3ac",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ea5ced93-881a-4e6b-9847-aa0bdfe8627b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "43a8b765-5462-449d-a8f3-31ec4166e8aa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "705fa194-a3a4-4539-939a-922b2c3fd1c5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.488 /home/vagrant/spdk_repo/spdk 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:50.488 00:19:50.488 real 0m12.969s 00:19:50.488 user 0m13.059s 00:19:50.488 sys 0m0.909s 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.488 16:21:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:50.488 ************************************ 00:19:50.488 END TEST bdev_fio 00:19:50.488 ************************************ 00:19:50.488 16:21:05 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:50.488 16:21:05 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:50.488 16:21:05 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:50.488 16:21:05 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.488 16:21:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.488 ************************************ 00:19:50.488 START TEST bdev_verify 00:19:50.488 ************************************ 00:19:50.488 16:21:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:50.749 [2024-09-28 16:21:05.248949] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:50.749 [2024-09-28 16:21:05.249095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90552 ] 00:19:50.749 [2024-09-28 16:21:05.419140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.009 [2024-09-28 16:21:05.617812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.009 [2024-09-28 16:21:05.617863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.578 Running I/O for 5 seconds... 00:19:56.763 13435.00 IOPS, 52.48 MiB/s 12258.50 IOPS, 47.88 MiB/s 11898.67 IOPS, 46.48 MiB/s 11694.50 IOPS, 45.68 MiB/s 11591.40 IOPS, 45.28 MiB/s 00:19:56.763 Latency(us) 00:19:56.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.763 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.763 Verification LBA range: start 0x0 length 0x2000 00:19:56.763 raid5f : 5.02 4815.30 18.81 0.00 0.00 39880.10 211.06 32281.49 00:19:56.763 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.763 Verification LBA range: start 0x2000 length 0x2000 00:19:56.763 raid5f : 5.02 6748.10 26.36 0.00 0.00 28555.69 128.78 41668.30 00:19:56.763 =================================================================================================================== 00:19:56.763 Total : 11563.40 45.17 0.00 0.00 33271.59 128.78 41668.30 00:19:58.162 00:19:58.162 real 0m7.454s 00:19:58.162 user 0m13.584s 00:19:58.162 sys 0m0.268s 00:19:58.162 16:21:12 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:58.162 16:21:12 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:58.162 ************************************ 00:19:58.162 END TEST bdev_verify 00:19:58.162 
************************************ 00:19:58.162 16:21:12 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:58.162 16:21:12 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:58.162 16:21:12 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.162 16:21:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:58.162 ************************************ 00:19:58.162 START TEST bdev_verify_big_io 00:19:58.162 ************************************ 00:19:58.162 16:21:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:58.162 [2024-09-28 16:21:12.770190] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:58.162 [2024-09-28 16:21:12.770348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90651 ] 00:19:58.422 [2024-09-28 16:21:12.939455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:58.682 [2024-09-28 16:21:13.137058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.682 [2024-09-28 16:21:13.137107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.251 Running I/O for 5 seconds... 
00:20:04.350 633.00 IOPS, 39.56 MiB/s 761.00 IOPS, 47.56 MiB/s 803.00 IOPS, 50.19 MiB/s 793.25 IOPS, 49.58 MiB/s 812.40 IOPS, 50.77 MiB/s 00:20:04.350 Latency(us) 00:20:04.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.350 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.350 Verification LBA range: start 0x0 length 0x200 00:20:04.350 raid5f : 5.34 356.51 22.28 0.00 0.00 8905486.11 195.86 375472.63 00:20:04.350 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.350 Verification LBA range: start 0x200 length 0x200 00:20:04.350 raid5f : 5.29 456.00 28.50 0.00 0.00 7043939.77 157.40 298546.53 00:20:04.350 =================================================================================================================== 00:20:04.350 Total : 812.52 50.78 0.00 0.00 7865108.68 157.40 375472.63 00:20:06.255 00:20:06.255 real 0m7.797s 00:20:06.255 user 0m14.252s 00:20:06.255 sys 0m0.290s 00:20:06.255 16:21:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.255 16:21:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.255 ************************************ 00:20:06.255 END TEST bdev_verify_big_io 00:20:06.255 ************************************ 00:20:06.255 16:21:20 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.255 16:21:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:06.255 16:21:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.255 16:21:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:06.255 ************************************ 00:20:06.255 START TEST bdev_write_zeroes 00:20:06.255 ************************************ 
00:20:06.256 16:21:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.256 [2024-09-28 16:21:20.645331] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:06.256 [2024-09-28 16:21:20.645478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90750 ] 00:20:06.256 [2024-09-28 16:21:20.813122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.515 [2024-09-28 16:21:21.008989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.083 Running I/O for 1 seconds... 00:20:08.021 30543.00 IOPS, 119.31 MiB/s 00:20:08.021 Latency(us) 00:20:08.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.021 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.021 raid5f : 1.01 30527.40 119.25 0.00 0.00 4181.69 1201.97 5723.67 00:20:08.021 =================================================================================================================== 00:20:08.021 Total : 30527.40 119.25 0.00 0.00 4181.69 1201.97 5723.67 00:20:09.402 00:20:09.402 real 0m3.420s 00:20:09.402 user 0m3.014s 00:20:09.402 sys 0m0.281s 00:20:09.402 16:21:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:09.402 16:21:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:09.402 ************************************ 00:20:09.402 END TEST bdev_write_zeroes 00:20:09.402 ************************************ 00:20:09.402 16:21:24 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:09.402 16:21:24 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:09.402 16:21:24 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:09.402 16:21:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:09.402 ************************************ 00:20:09.402 START TEST bdev_json_nonenclosed 00:20:09.402 ************************************ 00:20:09.402 16:21:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:09.662 [2024-09-28 16:21:24.138850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:09.662 [2024-09-28 16:21:24.138989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90809 ] 00:20:09.662 [2024-09-28 16:21:24.309312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.968 [2024-09-28 16:21:24.502155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.968 [2024-09-28 16:21:24.502260] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:20:09.968 [2024-09-28 16:21:24.502284] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:09.968 [2024-09-28 16:21:24.502294] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:10.228 00:20:10.228 real 0m0.850s 00:20:10.228 user 0m0.592s 00:20:10.228 sys 0m0.153s 00:20:10.228 16:21:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.228 16:21:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:10.228 ************************************ 00:20:10.228 END TEST bdev_json_nonenclosed 00:20:10.228 ************************************ 00:20:10.488 16:21:24 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.488 16:21:24 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:10.488 16:21:24 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.488 16:21:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.488 ************************************ 00:20:10.488 START TEST bdev_json_nonarray 00:20:10.488 ************************************ 00:20:10.488 16:21:24 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.488 [2024-09-28 16:21:25.057463] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:20:10.488 [2024-09-28 16:21:25.057568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90840 ] 00:20:10.748 [2024-09-28 16:21:25.220533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.748 [2024-09-28 16:21:25.425562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.748 [2024-09-28 16:21:25.425665] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:20:10.748 [2024-09-28 16:21:25.425689] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:10.748 [2024-09-28 16:21:25.425699] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:11.319 00:20:11.319 real 0m0.844s 00:20:11.319 user 0m0.586s 00:20:11.319 sys 0m0.153s 00:20:11.319 16:21:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.319 16:21:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:11.319 ************************************ 00:20:11.319 END TEST bdev_json_nonarray 00:20:11.319 ************************************ 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:11.319 16:21:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:11.319 00:20:11.319 real 0m49.333s 00:20:11.319 user 1m5.773s 00:20:11.319 sys 0m5.212s 00:20:11.319 16:21:25 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.319 16:21:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:11.319 ************************************ 00:20:11.319 END TEST blockdev_raid5f 00:20:11.319 ************************************ 00:20:11.319 16:21:25 -- spdk/autotest.sh@194 -- # uname -s 00:20:11.319 16:21:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:11.319 16:21:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:11.319 16:21:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:11.319 16:21:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:11.319 16:21:25 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:11.319 16:21:25 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:11.319 16:21:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.319 16:21:25 -- common/autotest_common.sh@10 -- # set +x 00:20:11.579 16:21:26 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:11.579 16:21:26 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:20:11.579 16:21:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:11.579 16:21:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:11.579 16:21:26 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:20:11.579 16:21:26 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:20:11.579 16:21:26 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:20:11.579 16:21:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.579 16:21:26 -- common/autotest_common.sh@10 -- # set +x 00:20:11.579 16:21:26 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:20:11.579 16:21:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:11.579 16:21:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:11.579 16:21:26 -- common/autotest_common.sh@10 -- # set +x 00:20:14.121 INFO: APP EXITING 00:20:14.121 INFO: killing all VMs 00:20:14.121 INFO: killing vhost app 00:20:14.121 INFO: EXIT DONE 00:20:14.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:14.381 Waiting for block devices as requested 00:20:14.381 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:14.381 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:15.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.322 Cleaning 00:20:15.322 Removing: /var/run/dpdk/spdk0/config 00:20:15.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:15.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:15.322 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:15.322 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:15.322 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:15.322 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:15.322 Removing: /dev/shm/spdk_tgt_trace.pid56793 00:20:15.322 Removing: /var/run/dpdk/spdk0 00:20:15.322 Removing: /var/run/dpdk/spdk_pid56563 00:20:15.322 Removing: /var/run/dpdk/spdk_pid56793 00:20:15.322 Removing: /var/run/dpdk/spdk_pid57027 00:20:15.322 Removing: /var/run/dpdk/spdk_pid57137 00:20:15.322 Removing: /var/run/dpdk/spdk_pid57193 00:20:15.322 Removing: /var/run/dpdk/spdk_pid57327 00:20:15.322 Removing: /var/run/dpdk/spdk_pid57350 00:20:15.322 Removing: /var/run/dpdk/spdk_pid57566 00:20:15.582 Removing: /var/run/dpdk/spdk_pid57679 00:20:15.582 Removing: /var/run/dpdk/spdk_pid57792 00:20:15.582 Removing: /var/run/dpdk/spdk_pid57919 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58033 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58078 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58120 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58196 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58313 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58759 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58835 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58915 00:20:15.582 Removing: /var/run/dpdk/spdk_pid58931 00:20:15.582 Removing: /var/run/dpdk/spdk_pid59093 00:20:15.582 Removing: /var/run/dpdk/spdk_pid59109 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59276 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59292 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59367 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59390 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59460 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59478 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59684 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59726 00:20:15.583 Removing: /var/run/dpdk/spdk_pid59815 00:20:15.583 Removing: /var/run/dpdk/spdk_pid61188 00:20:15.583 Removing: 
/var/run/dpdk/spdk_pid61405 00:20:15.583 Removing: /var/run/dpdk/spdk_pid61545 00:20:15.583 Removing: /var/run/dpdk/spdk_pid62194 00:20:15.583 Removing: /var/run/dpdk/spdk_pid62411 00:20:15.583 Removing: /var/run/dpdk/spdk_pid62551 00:20:15.583 Removing: /var/run/dpdk/spdk_pid63200 00:20:15.583 Removing: /var/run/dpdk/spdk_pid63530 00:20:15.583 Removing: /var/run/dpdk/spdk_pid63680 00:20:15.583 Removing: /var/run/dpdk/spdk_pid65073 00:20:15.583 Removing: /var/run/dpdk/spdk_pid65326 00:20:15.583 Removing: /var/run/dpdk/spdk_pid65478 00:20:15.583 Removing: /var/run/dpdk/spdk_pid66868 00:20:15.583 Removing: /var/run/dpdk/spdk_pid67127 00:20:15.583 Removing: /var/run/dpdk/spdk_pid67278 00:20:15.583 Removing: /var/run/dpdk/spdk_pid68669 00:20:15.583 Removing: /var/run/dpdk/spdk_pid69114 00:20:15.583 Removing: /var/run/dpdk/spdk_pid69260 00:20:15.583 Removing: /var/run/dpdk/spdk_pid70752 00:20:15.583 Removing: /var/run/dpdk/spdk_pid71022 00:20:15.583 Removing: /var/run/dpdk/spdk_pid71164 00:20:15.583 Removing: /var/run/dpdk/spdk_pid72658 00:20:15.583 Removing: /var/run/dpdk/spdk_pid72923 00:20:15.583 Removing: /var/run/dpdk/spdk_pid73074 00:20:15.583 Removing: /var/run/dpdk/spdk_pid74566 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75053 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75204 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75348 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75768 00:20:15.583 Removing: /var/run/dpdk/spdk_pid76498 00:20:15.583 Removing: /var/run/dpdk/spdk_pid76874 00:20:15.843 Removing: /var/run/dpdk/spdk_pid77568 00:20:15.843 Removing: /var/run/dpdk/spdk_pid78003 00:20:15.843 Removing: /var/run/dpdk/spdk_pid78763 00:20:15.843 Removing: /var/run/dpdk/spdk_pid79172 00:20:15.843 Removing: /var/run/dpdk/spdk_pid81135 00:20:15.843 Removing: /var/run/dpdk/spdk_pid81576 00:20:15.843 Removing: /var/run/dpdk/spdk_pid82016 00:20:15.843 Removing: /var/run/dpdk/spdk_pid84119 00:20:15.843 Removing: /var/run/dpdk/spdk_pid84600 00:20:15.843 Removing: 
/var/run/dpdk/spdk_pid85122 00:20:15.843 Removing: /var/run/dpdk/spdk_pid86190 00:20:15.843 Removing: /var/run/dpdk/spdk_pid86518 00:20:15.843 Removing: /var/run/dpdk/spdk_pid87457 00:20:15.843 Removing: /var/run/dpdk/spdk_pid87781 00:20:15.843 Removing: /var/run/dpdk/spdk_pid88729 00:20:15.843 Removing: /var/run/dpdk/spdk_pid89053 00:20:15.843 Removing: /var/run/dpdk/spdk_pid89729 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90009 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90077 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90119 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90375 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90552 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90651 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90750 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90809 00:20:15.843 Removing: /var/run/dpdk/spdk_pid90840 00:20:15.843 Clean 00:20:15.843 16:21:30 -- common/autotest_common.sh@1451 -- # return 0 00:20:15.843 16:21:30 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:20:15.843 16:21:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.843 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.843 16:21:30 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:20:15.843 16:21:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.843 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:20:16.103 16:21:30 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:16.103 16:21:30 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:16.103 16:21:30 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:16.103 16:21:30 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:20:16.103 16:21:30 -- spdk/autotest.sh@394 -- # hostname 00:20:16.103 16:21:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:16.363 geninfo: WARNING: invalid characters removed from testname! 00:20:42.933 16:21:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:43.193 16:21:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:45.100 16:21:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:47.008 16:22:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:48.914 16:22:03 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:50.822 16:22:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:53.359 16:22:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:53.359 16:22:07 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:20:53.359 16:22:07 -- common/autotest_common.sh@1681 -- $ lcov --version 00:20:53.359 16:22:07 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:20:53.359 16:22:07 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:20:53.359 16:22:07 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:20:53.359 16:22:07 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:20:53.359 16:22:07 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:20:53.359 16:22:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:20:53.359 16:22:07 -- scripts/common.sh@336 -- $ read -ra ver1 00:20:53.359 16:22:07 -- scripts/common.sh@337 -- $ IFS=.-: 00:20:53.359 16:22:07 -- scripts/common.sh@337 -- $ read -ra ver2 00:20:53.359 16:22:07 -- scripts/common.sh@338 -- $ local 'op=<' 00:20:53.359 16:22:07 -- scripts/common.sh@340 -- $ ver1_l=2 00:20:53.359 16:22:07 -- scripts/common.sh@341 -- $ ver2_l=1 00:20:53.359 16:22:07 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:20:53.359 16:22:07 -- scripts/common.sh@344 -- $ case "$op" in 00:20:53.359 16:22:07 -- scripts/common.sh@345 -- $ : 1 
00:20:53.359 16:22:07 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:20:53.359 16:22:07 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.359 16:22:07 -- scripts/common.sh@365 -- $ decimal 1 00:20:53.359 16:22:07 -- scripts/common.sh@353 -- $ local d=1 00:20:53.359 16:22:07 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:53.359 16:22:07 -- scripts/common.sh@355 -- $ echo 1 00:20:53.359 16:22:07 -- scripts/common.sh@365 -- $ ver1[v]=1 00:20:53.359 16:22:07 -- scripts/common.sh@366 -- $ decimal 2 00:20:53.359 16:22:07 -- scripts/common.sh@353 -- $ local d=2 00:20:53.359 16:22:07 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:53.359 16:22:07 -- scripts/common.sh@355 -- $ echo 2 00:20:53.359 16:22:07 -- scripts/common.sh@366 -- $ ver2[v]=2 00:20:53.359 16:22:07 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:20:53.359 16:22:07 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:20:53.359 16:22:07 -- scripts/common.sh@368 -- $ return 0 00:20:53.359 16:22:07 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.359 16:22:07 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:20:53.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.359 --rc genhtml_branch_coverage=1 00:20:53.359 --rc genhtml_function_coverage=1 00:20:53.359 --rc genhtml_legend=1 00:20:53.359 --rc geninfo_all_blocks=1 00:20:53.359 --rc geninfo_unexecuted_blocks=1 00:20:53.359 00:20:53.359 ' 00:20:53.359 16:22:07 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:20:53.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.359 --rc genhtml_branch_coverage=1 00:20:53.359 --rc genhtml_function_coverage=1 00:20:53.359 --rc genhtml_legend=1 00:20:53.359 --rc geninfo_all_blocks=1 00:20:53.359 --rc geninfo_unexecuted_blocks=1 00:20:53.359 00:20:53.359 ' 00:20:53.359 16:22:07 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:20:53.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.359 --rc genhtml_branch_coverage=1 00:20:53.359 --rc genhtml_function_coverage=1 00:20:53.359 --rc genhtml_legend=1 00:20:53.359 --rc geninfo_all_blocks=1 00:20:53.359 --rc geninfo_unexecuted_blocks=1 00:20:53.359 00:20:53.359 ' 00:20:53.359 16:22:07 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:20:53.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.359 --rc genhtml_branch_coverage=1 00:20:53.359 --rc genhtml_function_coverage=1 00:20:53.359 --rc genhtml_legend=1 00:20:53.359 --rc geninfo_all_blocks=1 00:20:53.359 --rc geninfo_unexecuted_blocks=1 00:20:53.359 00:20:53.359 ' 00:20:53.359 16:22:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.359 16:22:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:20:53.359 16:22:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:53.359 16:22:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.359 16:22:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.359 16:22:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.359 16:22:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.359 16:22:07 -- paths/export.sh@4 
-- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.359 16:22:07 -- paths/export.sh@5 -- $ export PATH 00:20:53.359 16:22:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.359 16:22:07 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:53.359 16:22:07 -- common/autobuild_common.sh@479 -- $ date +%s 00:20:53.359 16:22:07 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727540527.XXXXXX 00:20:53.359 16:22:07 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727540527.dBDIRi 00:20:53.359 16:22:07 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:20:53.359 16:22:07 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:20:53.359 16:22:07 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:53.359 16:22:07 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:53.359 16:22:07 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:53.359 16:22:07 -- common/autobuild_common.sh@495 -- $ 
get_config_params 00:20:53.359 16:22:07 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:20:53.359 16:22:07 -- common/autotest_common.sh@10 -- $ set +x 00:20:53.359 16:22:07 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:20:53.359 16:22:07 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:20:53.359 16:22:07 -- pm/common@17 -- $ local monitor 00:20:53.359 16:22:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:53.359 16:22:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:53.359 16:22:07 -- pm/common@25 -- $ sleep 1 00:20:53.359 16:22:07 -- pm/common@21 -- $ date +%s 00:20:53.360 16:22:07 -- pm/common@21 -- $ date +%s 00:20:53.360 16:22:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727540527 00:20:53.360 16:22:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727540527 00:20:53.360 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727540527_collect-cpu-load.pm.log 00:20:53.360 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727540527_collect-vmstat.pm.log 00:20:54.345 16:22:08 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:20:54.345 16:22:08 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:20:54.345 16:22:08 -- spdk/autopackage.sh@14 -- $ timing_finish 00:20:54.345 16:22:08 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:54.345 16:22:08 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:54.346 
16:22:08 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:54.346 16:22:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:20:54.346 16:22:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:54.346 16:22:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:54.346 16:22:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:54.346 16:22:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:20:54.346 16:22:08 -- pm/common@44 -- $ pid=92353 00:20:54.346 16:22:08 -- pm/common@50 -- $ kill -TERM 92353 00:20:54.346 16:22:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:54.346 16:22:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:54.346 16:22:08 -- pm/common@44 -- $ pid=92355 00:20:54.346 16:22:08 -- pm/common@50 -- $ kill -TERM 92355 00:20:54.346 + [[ -n 5432 ]] 00:20:54.346 + sudo kill 5432 00:20:54.365 [Pipeline] } 00:20:54.380 [Pipeline] // timeout 00:20:54.385 [Pipeline] } 00:20:54.398 [Pipeline] // stage 00:20:54.403 [Pipeline] } 00:20:54.416 [Pipeline] // catchError 00:20:54.425 [Pipeline] stage 00:20:54.427 [Pipeline] { (Stop VM) 00:20:54.438 [Pipeline] sh 00:20:54.722 + vagrant halt 00:20:57.283 ==> default: Halting domain... 00:21:05.425 [Pipeline] sh 00:21:05.707 + vagrant destroy -f 00:21:08.247 ==> default: Removing domain... 
00:21:08.261 [Pipeline] sh 00:21:08.547 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:08.558 [Pipeline] } 00:21:08.574 [Pipeline] // stage 00:21:08.581 [Pipeline] } 00:21:08.595 [Pipeline] // dir 00:21:08.602 [Pipeline] } 00:21:08.617 [Pipeline] // wrap 00:21:08.624 [Pipeline] } 00:21:08.638 [Pipeline] // catchError 00:21:08.648 [Pipeline] stage 00:21:08.650 [Pipeline] { (Epilogue) 00:21:08.665 [Pipeline] sh 00:21:08.953 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:13.160 [Pipeline] catchError 00:21:13.161 [Pipeline] { 00:21:13.172 [Pipeline] sh 00:21:13.455 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:13.455 Artifacts sizes are good 00:21:13.465 [Pipeline] } 00:21:13.478 [Pipeline] // catchError 00:21:13.487 [Pipeline] archiveArtifacts 00:21:13.493 Archiving artifacts 00:21:13.606 [Pipeline] cleanWs 00:21:13.617 [WS-CLEANUP] Deleting project workspace... 00:21:13.617 [WS-CLEANUP] Deferred wipeout is used... 00:21:13.624 [WS-CLEANUP] done 00:21:13.625 [Pipeline] } 00:21:13.639 [Pipeline] // stage 00:21:13.643 [Pipeline] } 00:21:13.656 [Pipeline] // node 00:21:13.660 [Pipeline] End of Pipeline 00:21:13.701 Finished: SUCCESS